Visualising Kubernetes Objects' Life Cycles

This is a short visual guide, using UML Activity diagrams, to gain insight into the Kubernetes Probe and Deployment life cycles.

Probe Life Cycle

Fine-grained control over a Probe's behaviour is achieved using the initialDelaySeconds, timeoutSeconds, periodSeconds, failureThreshold, and successThreshold attributes. For example:

# Pod manifest file snippet
spec:
  containers:
  - image: ...
    livenessProbe:
      httpGet:
        path: /healthy
        port: 8080
      initialDelaySeconds: 5
      timeoutSeconds: 1
      periodSeconds: 10
      failureThreshold: 3
      successThreshold: 1

Let us look at each attribute, one at a time:

initialDelaySeconds: 5

Here we say that we want to wait at least five seconds before the Probe starts. This is useful to account, for instance, for a web server that may take a while to start:

timeoutSeconds: 1

If our service is a bit slow to respond, we may want to give it extra time. In this case, our http server on port 8080 must respond almost right away (within one second):

periodSeconds: 10

We can’t be spamming http://localhost:8080/healthy every nanosecond. We should run a check, be happy with the result, and come back later for another test. This attribute controls how frequently the Probe is run:

failureThreshold: 3

Should we consider the Probe check a fail just because of one error? Surely not. This attribute controls how many consecutive failures are required before the container is deemed failed to the external world:

successThreshold: 1

This is odd. What is the use of a success threshold? Well, with failureThreshold we control how many sequential failures we need before we actually account for a container failure (be it in the readiness or liveness context). This works like a counter: one failure, two failures, three failures, and... bang! But how do we reset this counter? By counting successful Probe checks. By default, it takes just one successful result to reset the counter to zero, but we may be more pessimistic and wait for two or more. (Note that liveness Probes only accept a successThreshold of 1; higher values apply to readiness Probes.)
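
As an illustrative sketch (the /ready path and port are made up for the example), a readiness Probe that waits for two consecutive successful checks before marking the container ready might look like this:

readinessProbe:
  httpGet:
    path: /ready          # hypothetical endpoint
    port: 8080
  periodSeconds: 10
  failureThreshold: 3
  successThreshold: 2     # two consecutive successes required before the container is considered ready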

The table below summarises the discussed attributes and shows their default and minimum values.

Probe Attribute Name   Default   Minimum
initialDelaySeconds    0         0
periodSeconds          10        1
timeoutSeconds         1         1
successThreshold       1         1
failureThreshold       3         1

We now look at how these attributes fit into the Pod life cycle in a number of key stages.

  1. Container Created: At this point, the Probe isn’t running yet. There is a wait state before transitioning to (2) set by the initialDelaySeconds attribute.
  2. Probe Started: This is when the failure and success counters are set to zero and a transition to (3) occurs.
  3. Run Probe Check: When a specific check is conducted (e.g., HTTP), a time-out counter starts which is set by the timeoutSeconds attribute. If a failure or a time-out is detected, there is a transition to (4). If there is no time-out and a success state is detected, there is a transition to (5).
  4. Failure: In this case, the failure counter is incremented, and the success counter is set to zero. Then, there is a transition to (6).
  5. Success: In this case, the success counter is incremented, and there is a transition to (6). If the success counter is greater than or equal to the successThreshold attribute, the failure counter is set to zero.
  6. Determine Failure: If the failure counter is greater than or equal to the value specified by the failureThreshold attribute, the Probe reports a failure; the resulting action depends on whether it is a readiness or a liveness Probe. Otherwise, there is a wait state determined by the periodSeconds attribute, followed by a transition back to (3).
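
To tie these stages back to the manifest, the snippet below repeats the liveness Probe from the earlier example, with comments indicating which stage each attribute governs:

livenessProbe:
  httpGet:
    path: /healthy
    port: 8080
  initialDelaySeconds: 5   # wait state between (1) and (2)
  timeoutSeconds: 1        # time-out counter started in (3)
  periodSeconds: 10        # wait state in (6) before returning to (3)
  failureThreshold: 3      # failures required in (6) to report a failure
  successThreshold: 1      # successes required in (5) to reset the failure counter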

Deployment Life Cycle

Deployments may be categorised into two types: revision-tracking and scaling-only.

A revision-tracking Deployment is one that changes some aspect of the Pod's specification, most likely the number and/or version of the container images declared within. A Deployment change that only alters the number of replicas, whether applied imperatively or declaratively, does not trigger a revision. For example, issuing the kubectl scale command will not create a revision point that can be used to undo the scaling change; returning to the previous number of replicas involves setting that number again.
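
As a quick sketch of this difference, assuming a Deployment and container both named web and an nginx image (all hypothetical), the following sequence shows which commands record a revision:

kubectl set image deployment/web web=nginx:1.25   # Pod template change: a new revision is recorded
kubectl rollout history deployment/web            # the revision list grows by one
kubectl scale deployment/web --replicas=5         # replica count only: no new revision is recorded
kubectl rollout undo deployment/web               # reverts the image change, not the scaling change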

A scaling-only Deployment is achieved either imperatively, by using the kubectl scale command, or declaratively, by setting the deployment.spec.replicas attribute and applying the corresponding file with the kubectl apply -f <DEPLOYMENT-MANIFEST> command.
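
The declarative route might look like this minimal sketch, assuming the manifest lives in a file named deployment.yaml (a hypothetical name):

# deployment.yaml (snippet): only spec.replicas changes between applies
spec:
  replicas: 5

followed by:

kubectl apply -f deployment.yaml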

Since scaling-only changes are managed entirely by the ReplicaSet controller, the Deployment controller provides no revision tracking (and therefore no rollback capability) for them. Although this behaviour is rather inconsistent, one can argue that altering the number of replicas is not as consequential as changing an image.

Scaling Only

A scaling-only deployment creates new Pods if the number of currently running Pods is lower than the replicas value, or terminates running ones if it is higher.

Recreate Strategy

When using the Recreate strategy, all existing Pods are terminated first; only when zero Pods are left running are new ones created, until the number of running Pods equals the replicas value.
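
A minimal manifest sketch for this strategy (only the strategy type is relevant here) looks as follows:

...
spec:
  replicas: 5
  strategy:
    type: Recreate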

Rolling Update

A RollingUpdate strategy is typically one in which one Pod is updated at a time (and the load balancer is managed accordingly behind the scenes) so that the consumer does not experience downtime and resources (e.g., number of Nodes) are rationalised.

In practice, both conventional “one at a time” rolling updates and blue/green deployments can be achieved using the same Rolling Update mechanism by tuning the deployment.spec.strategy.rollingUpdate.maxSurge and deployment.spec.strategy.rollingUpdate.maxUnavailable attributes.

For now, let us contemplate the following specification:

...
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0

The maxSurge and maxUnavailable values shown here allow us to tune the nature of the Rolling Update:

  • maxSurge: This property specifies the number of Pods for the target (New ReplicaSet) that must be created before the termination process on the baseline (Old ReplicaSet) starts. If this value is 1, this means that the number of replicas will remain constant during the deployment at the cost of Kubernetes allocating resources for one extra Pod at any given time.
  • maxUnavailable: This property specifies the maximum number of Pods that may be unavailable at any given time. If this number is zero, as in the preceding example, a maxSurge value of at least 1 is required, since spare resources are needed to keep the number of desired replicas constant.
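
Conversely, if allocating spare resources is not an option, the update can proceed within the existing capacity; a minimal sketch with illustrative values:

...
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1

Here, one old Pod is terminated before each new Pod is created, so the Deployment temporarily runs one replica short rather than one over.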

Please note that the maxUnavailable and maxSurge attributes also accept percentage values, relative to the number of replicas. For example, in the following case, an extra 25% of the replica count will be allocated to guarantee that there is no decrease in the number of running replicas:

...
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 0%

This example is equivalent to the one initially discussed since 25% of four replicas is exactly one replica.
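
As for the blue/green-style rollout mentioned earlier, one rough approximation under the same mechanism is to let the entire new ReplicaSet come up before any old Pod is removed, for example:

...
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0%

With these illustrative values, Kubernetes may create all of the new Pods before terminating the old ones, at the cost of temporarily doubling the resources allocated to the Deployment.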

