Using KubeAssert Effectively

MorningSpace
5 min read · Jun 14, 2021

KubeAssert is a kubectl plugin used to make assertions against resources on your Kubernetes cluster from the command line. It is an open source project that I created on GitHub.

In this second post of the KubeAssert series, I will share some tips and tricks on how to use KubeAssert effectively when you manage your Kubernetes cluster on a daily basis.

Using Multiple Label and Field Selectors

When you use an assertion such as exist or not-exist to validate the existence of a Kubernetes resource, you can use a label selector and/or a field selector to filter the query results returned from the cluster you are working with. For example, to assert that a pod with the label app equal to echo is running in a namespace called foo, you can use the assertion below, which specifies both a label selector and a field selector:

kubectl assert exist pods \
-l app=echo --field-selector status.phase=Running -n foo

You can also apply multiple label and/or field selectors in the same assertion as needed. For example, to assert that running pods with multiple labels exist in a namespace, you can write the assertion as below:

kubectl assert exist pods -l app=echo -l component=proxy \
--field-selector metadata.namespace==foo \
--field-selector status.phase=Running \
--all-namespaces

Alternatively, you can use a comma-separated list to specify more than one requirement with a single -l or --field-selector option. For example, the assertion below has the same effect as the one above but is more compact:

kubectl assert exist pods -l app=echo,component=proxy \
--field-selector metadata.namespace==foo,status.phase=Running \
--all-namespaces
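
The same selector options apply to the negative assertion not-exist. As a hedged sketch that reuses the app=echo label and foo namespace from above (status.phase=Failed is just an illustrative server-side field selector), you could assert that no failed pod exists for the app:

kubectl assert not-exist pods -l app=echo \
--field-selector status.phase=Failed -n foo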

Using Enhanced Field Selector

When you assert the existence of a Kubernetes resource using the exist or not-exist assertion, you can filter the query results by specifying a field selector, but the field selector support is very limited. This is because both exist and not-exist use kubectl get to query the resource underneath, and they pass your criteria directly to the native --field-selector option provided by kubectl. Per the Kubernetes documentation, filtering by fields actually happens on the server side, and the server only supports a limited number of field queries per type. As an example, when asserting pods, we can query by some fields under status using a field selector, but this will not work for deployments:

kubectl assert exist deployments -l app=echo --field-selector status.replicas=1
ASSERT deployments matching label criteria 'app=echo' and field criteria 'status.replicas=1' should exist.
Error from server (BadRequest): Unable to find "extensions/v1beta1, Resource=deployments" that match label selector "app=echo", field selector "status.replicas=1": "status.replicas" is not a known field selector: only "metadata.name", "metadata.namespace"
ASSERT FAIL Error getting resource(s).

Because of this, there are two additional assertions, exist-enhanced and not-exist-enhanced, which provide the same functionality but with enhanced field selector support. So, the above assertion can be rewritten as below:

kubectl assert exist-enhanced deployments -l app=echo --field-selector status.replicas=1
ASSERT deployments matching label criteria 'app=echo' and field criteria 'status.replicas=1' should exist.
INFO Found 1 resource(s).
NAME   NAMESPACE   COL0
echo   default     1
ASSERT PASS
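
The enhanced field selector is not limited to positive assertions either. As a hedged sketch (the spec.replicas value and the foo namespace are illustrative), you could assert that no deployment has been scaled down to zero replicas:

kubectl assert not-exist-enhanced deployments --field-selector spec.replicas=0 -n foo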

The native field selector supports the operators =, ==, and != (= and == mean the same thing), while the enhanced field selector also supports regex matching using =~. This makes it much more flexible and powerful when you define a field selector. Here are some examples:

  • To assert that service accounts in the foo namespace include a specified secret:
kubectl assert exist-enhanced serviceaccounts --field-selector 'secrets[*].name=~my-secret' -n foo
  • To assert that a custom resource has at least one condition element under status whose type field is Deployed:
kubectl assert exist-enhanced MyResources --field-selector 'status.conditions[*].type=~Deployed'
  • To assert that the names of all instances of a custom resource start with text from a specified list:
kubectl assert exist-enhanced MyResource --field-selector metadata.name=~'foo.*|bar.*|baz.*'

Validating Pod Status

Although it is possible to assert pod status using the exist, not-exist, exist-enhanced, and not-exist-enhanced assertions, it can be complicated to write such an assertion in one line.

For convenience, there are a few assertions whose names start with pod- that can be used to validate pod status in a more effective way:

  • Use pod-ready to validate pod readiness.
  • Use pod-restarts to validate the pod restart count.
  • Use pod-not-terminating to validate that no pod keeps terminating.

Here are some examples.

  • To assert that all pods are ready in a specified namespace or in all namespaces:
kubectl assert pod-ready pods -n foo
kubectl assert pod-ready pods --all-namespaces
  • To assert that no pod keeps terminating in a specified namespace or in any namespace:
kubectl assert pod-not-terminating -n foo
kubectl assert pod-not-terminating --all-namespaces
  • To assert that pod restarts in a specified namespace or in all namespaces are less than an expected value:
kubectl assert pod-restarts -lt 10 -n foo
kubectl assert pod-restarts -lt 10 --all-namespaces
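
Since each assertion is just a command, you can chain the pod assertions above into a simple daily health check. Below is a minimal sketch, assuming the plugin exits with a non-zero status when an assertion fails and that foo is the namespace you care about:

#!/usr/bin/env bash
# Hypothetical daily health check built from the pod assertions above.
# Assumes a failed assertion makes kubectl assert return a non-zero exit code.
set -e
kubectl assert pod-ready pods -n foo
kubectl assert pod-not-terminating -n foo
kubectl assert pod-restarts -lt 10 -n foo
echo "All pod assertions passed for namespace foo."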

Detecting Objects That Keep Terminating

The pod-not-terminating assertion can be used to detect pods that keep terminating. However, pods are not the only objects that can end up in such a situation. If you want to detect this for objects other than pods, you can use exist-enhanced or not-exist-enhanced, or write your own assertion.

As an example, to assert that no instance of a custom resource keeps terminating in any namespace, we can check whether it has a deletionTimestamp in its metadata while its status.phase field is still Running. When a resource gets deleted, a deletionTimestamp is added to its metadata. If a resource has been deleted but is still running, it might be an instance that keeps terminating:

kubectl assert not-exist-enhanced MyResources --field-selector metadata.deletionTimestamp!='<none>',status.phase==Running --all-namespaces

As another example, to assert that no namespace keeps terminating in the cluster, we can check both the metadata deletionTimestamp and the finalizers. If a namespace has both, it is very likely stuck in terminating status, because Kubernetes will not delete the namespace as long as any finalizer is attached.

kubectl assert not-exist-enhanced namespace --field-selector metadata.deletionTimestamp!='<none>',spec.finalizers[*]!='<none>'

Summary

As you can see, combining different assertions with different options makes KubeAssert a very powerful way to assert Kubernetes resources. In the next post, I will show you how to extend KubeAssert by writing your own assertion when the default assertions do not match your specific needs.

You can learn more about KubeAssert by reading its online documentation. If you like it, please consider giving the project a star on GitHub. Also, any contributions, such as bug reports and code submissions, are very welcome.
