Leases
Distributed systems often have a need for "leases", which provide a mechanism to lock shared resources and coordinate activity between nodes. In Kubernetes, the "lease" concept is represented by Lease objects in the coordination.k8s.io API group, which are used for system-critical capabilities such as node heartbeats and component-level leader election.
Node Heartbeats
Kubernetes uses the Lease API to communicate kubelet node heartbeats to the Kubernetes API server. For every Node, there is a Lease object with a matching name in the kube-node-lease namespace. Under the hood, every kubelet heartbeat is an update request to this Lease object, updating the spec.renewTime field for the Lease. The Kubernetes control plane uses the time stamp of this field to determine the availability of this Node.
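For example, you can list the per-node Lease objects and inspect the one that backs a particular node's heartbeat (the node name below is a placeholder; substitute a node from your own cluster). The spec.renewTime field advances each time the kubelet renews the lease:
$ kubectl -n kube-node-lease get lease
$ kubectl -n kube-node-lease get lease <node-name> -o yaml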
See Node Lease objects for more details.
Leader Election
Leases are also used in Kubernetes to ensure only one instance of a component is running at any given time. This is used by control plane components like kube-controller-manager and kube-scheduler in HA configurations, where only one instance of the component should be actively running while the other instances are on standby.
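You can observe which instance currently holds the lock by looking at the corresponding Lease object. In a typical cluster these components use Leases named after themselves in the kube-system namespace (the exact names depend on how the components are configured); the HOLDER column, or spec.holderIdentity in the full object, identifies the active instance:
$ kubectl -n kube-system get lease kube-scheduler kube-controller-manager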
API Server Identity
Kubernetes v1.26 [beta]
Starting in Kubernetes v1.26, each kube-apiserver uses the Lease API to publish its identity to the rest of the system. While not particularly useful on its own, this provides a mechanism for clients to discover how many instances of kube-apiserver are operating the Kubernetes control plane.
The existence of kube-apiserver leases enables future capabilities that may require coordination between each kube-apiserver.
You can inspect Leases owned by each kube-apiserver by checking for lease objects in the kube-system namespace with the name kube-apiserver-<sha256-hash>. Alternatively, you can use the label selector k8s.io/component=kube-apiserver:
$ kubectl -n kube-system get lease -l k8s.io/component=kube-apiserver
NAME                                        HOLDER                                                                            AGE
kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a   kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a_9cbf54e5-1136-44bd-8f9a-1dcd15c346b4   5m33s
kube-apiserver-dz2dqprdpsgnm756t5rnov7yka   kube-apiserver-dz2dqprdpsgnm756t5rnov7yka_84f2a85d-37c1-4b14-b6b9-603e62e4896f   4m23s
kube-apiserver-fyloo45sdenffw2ugwaz3likua   kube-apiserver-fyloo45sdenffw2ugwaz3likua_c5ffa286-8a9a-45d4-91e7-61118ed58d2e   4m43s
The SHA256 hash used in the lease name is based on the OS hostname as seen by that kube-apiserver. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instances of kube-apiserver that use the same hostname will take over existing Leases using a new holder identity, as opposed to instantiating new lease objects. You can check the hostname used by kube-apiserver by checking the value of the kubernetes.io/hostname label:
$ kubectl -n kube-system get lease kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a -o yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  creationTimestamp: "2022-11-30T15:37:15Z"
  labels:
    k8s.io/component: kube-apiserver
    kubernetes.io/hostname: kind-control-plane
  name: kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a
  namespace: kube-system
  resourceVersion: "18171"
  uid: d6c68901-4ec5-4385-b1ef-2d783738da6c
spec:
  holderIdentity: kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a_9cbf54e5-1136-44bd-8f9a-1dcd15c346b4
  leaseDurationSeconds: 3600
  renewTime: "2022-11-30T18:04:27.912073Z"
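If you only want to see which hostname each kube-apiserver lease maps to, rather than dumping full objects, one option is a JSONPath query over the same label selector (a sketch; adjust the output format to taste):
$ kubectl -n kube-system get lease -l k8s.io/component=kube-apiserver \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.io/hostname}{"\n"}{end}'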
Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.