... because configuration is programming too
As described in previous posts, I have been experimenting with using container images and Helm charts with `kpt`. The hypothesis driving the experiments is in two parts:
- it’s highly desirable to be able to eyeball, diff, commit to git, and otherwise operate on configuration as data (i.e., YAML files)
- writing configuration as data means going without most of the tools – technical and mental – in the engineers' toolbox.
In other words, configuration is best authored as code, and best consumed as data.
The previous posts describe using `kpt fn` as a way to drive the generation of YAMLs from programs, and using `kpt pkg` as the means of consuming configurations. `kpt fn` runs a container image and saves the result out into YAML files. `kpt pkg` imports YAML files and can merge changes made upstream with changes you have made locally.
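Concretely, the two workflows look something like the following. This is a sketch: the repository URL is hypothetical, and the syntax matches the `kpt` versions current at the time of writing.

```shell
# Import a package of YAML files from an upstream git repository
# (the repository URL here is hypothetical).
$ kpt pkg get https://github.com/example/config-repo.git/nginx@v0.2.0 nginx/

# Later, merge upstream changes with any local edits.
$ kpt pkg update nginx/

# Separately: run a function container over the files in a directory,
# writing its output back as YAML files.
$ kpt fn run nginx/ --image gcr.io/kustomize-functions/example-nginx:v0.2.0
```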
But there is a disconnect: importing (or updating) and running a function are two distinct steps. With `kpt` you can have either the merging or the running of programs, but not both at the same time.
Using a Helm chart with `kpt`
For example, if you have a Helm chart you want to use in your configuration, with `kpt` you would need to either:

- expand it ahead of time with `helm template`, and commit that as your package to distribute; or,
- run it inside a function, perhaps operating on a declaration like a `HelmRelease` YAML, and distribute the definition of the function in a package.
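For the first option, the ahead-of-time expansion might look something like this (Helm 3 syntax; the release name and output layout are illustrative, not prescribed):

```shell
# Add the chart repository and render the chart to static YAML.
# The release name "flux" and the pinned version are illustrative.
$ helm repo add fluxcd https://charts.fluxcd.io
$ helm template flux fluxcd/flux --version 1.5.0 > flux/flux.yaml

# Commit the rendered YAML as the package to distribute.
$ git add flux/flux.yaml && git commit -m "Vendor rendered flux chart"
```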
In the first case, you lose the ability to provide parameters to the chart downstream. Your package is now just YAMLs, adapted to your specific needs. If you need to use configuration that only comes as a Helm chart, this is a way to get at it; but you end up with something less generally useful than the chart itself.
In the second case, you lose the ability to merge upstream changes: running the function again simply overwrites any changes you have made.
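For the second case, the declaration the function operates on might be something like a `HelmRelease` as understood by the Flux Helm operator. This is a sketch; the names and values shown are illustrative:

```yaml
# A HelmRelease declaration, as used by the Flux Helm operator;
# a function would expand this into the chart's rendered YAML.
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: flux
  namespace: default
spec:
  releaseName: flux
  chart:
    repository: https://charts.fluxcd.io
    name: flux
    version: 1.5.0
  values:
    replicaCount: 1   # illustrative chart value
```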
To be clear, it’s a completely reasonable design decision to make `kpt fn` and `kpt pkg` disjoint. For the designers of `kpt`, functions are like Kubernetes controllers that are run on files, expanding or otherwise acting on the static YAML files. The functions are downstream from the declarations in the package, which are considered definitive.
That’s just not how I want it to work.
Why not `spresm`?
To further explore the premise given at the top, I made `spresm`. With `spresm` you do not import other git repositories, but rather container images and Helm charts, which are expanded in place. As with `kpt`, updating a package will merge upstream changes with local changes.
This is how you consume a Helm chart:

```shell
$ spresm import helm --chart https://charts.fluxcd.io/flux --version 1.5 flux/
```

You are prompted for parameters (release options and values), and the chart is expanded using those parameters into `flux/`.
Similarly, you can run a container image to generate configuration:

```shell
$ spresm import image --image gcr.io/kustomize-functions/example-nginx --tag v0.2.0 nginx/
```

Again, you are prompted to supply parameters (this time, a `functionConfig`; see the appendix below for a suitable value for the above image), and the image is run with that as its input, with the output written out into files.
The specification for how to generate the files in `<dir>` is written to `<dir>/Spresmfile`.
Once imported, commit the files. You can then edit the specification (e.g., to update the chart version) and re-run the expansion, which will merge changes in the output with the local files.

```shell
$ spresm update --edit nginx/
```
It’s early days for `spresm`: it demonstrates that I can have what I wanted, but it’s far from ready for serious use.
Appendix A – functionConfig for the example-nginx image

This image is an example from the `kpt` functions catalog. It expects an input shaped like this:

```yaml
metadata:
  name: foo
spec:
  replicas: 3
```
When editing the parameters for `spresm`, this would look like:

```yaml
functionConfig:
  metadata:
    name: foo
  spec:
    replicas: 3
```