Disclaimer
This post showcases some macOS-specific tools, primarily brew. This isn’t meant as a platform statement; it’s simply to avoid the complexities of a multi-platform discussion for brevity’s sake.
Over the Hill and Under the Hill
So, this is going to be a bit of a shameless plug, though all the praise towards my current employer, Splunk, is purely accidental :)
My journey here has been nothing short of remarkable, largely thanks to the incredible people I’ve had the chance to work with. One of them was Krishnan Anantheswaran, known for his work on Istanbul, a JavaScript coverage framework I used back in my front-end days. Krishnan interviewed me for the company, and for a few months I soaked up as much knowledge as I could from him until his departure. Suddenly, I was thrown in at the deep end, tasked with understanding a vast ecosystem. The component I was to take over wasn’t particularly challenging, but the surrounding environment was immense. As you can imagine, one of my key challenges was learning how to deploy our software effectively. This post is a reflection on the lessons I learned along the way.
When the Kubernetes team was formed at Splunk, I was told that some teams were already using ksonnet, a tool for managing Kubernetes configurations similar to Helm. However, they quickly found that ksonnet wasn’t the most user-friendly tool and was rather complex. Moreover, it was abandoned by its creators around the same time the Kubernetes team was coming together. Helm had its issues too, primarily because it uses Golang templates, which, while powerful, were primarily designed for HTML, XML, and text generation. These templates can control whitespace, but they’re tricky to get right. A single mistake in YAML whitespace, and you’re in for a world of pain…
In contrast, ksonnet utilizes Jsonnet for templating, offering more complex and reusable configurations. Jsonnet enables capabilities like imports, functions, and variables, fostering modular and maintainable configurations. If only there were a tool that combined all the pros and eliminated the cons… Enter qbec (pronounced like the Canadian province), written by Krishnan and now a staple tool around here.
Later I will share some of the tools and conventions I find beneficial, such as folder structures and Makefiles, but let’s set the stage for the application we’ll be discussing.
The Application: Choosing the Right Example
I was grappling with the choice of a demonstration application. Initially, I considered a classic ToDo application, but it required incorporating storage elements like Redis or a database, which might dilute the focus of this post. Instead, I decided on a slightly contrived, yet very useful example - an RPN (Reverse Polish Notation) calculator.
Intermission: A Fun Fact About RPN
While Reverse Polish Notation saw some use as early as the 1940s, it was the Australian philosopher Charles Hamblin who finalized the algorithm and the notation, adapting Jan Łukasiewicz’s Polish notation for computers. It’s a suffix notation that aligns well with stack operations.
Fun fact
Hamblin initially suggested naming it “Azciweisakul notation” as a spelled-backwards tribute to Jan Łukasiewicz’s last name. Glad it didn’t stick.
The RPN sequence is straightforward: first the operands are entered, then the operation. For example, the RPN for (2 + 2) * 5 would be: 2 2 + 5 *.
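To make the mechanics concrete, here is a minimal sketch of a stack-based RPN evaluator in Go. It is illustrative only, and the function name is my own; the actual rpn.go in the repository may well differ:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// evalRPN evaluates a whitespace-separated RPN expression such as "2 2 + 5 *".
// Operands are pushed onto a stack; each operator pops two operands and
// pushes the result back.
func evalRPN(expr string) (float64, error) {
	var stack []float64
	for _, tok := range strings.Fields(expr) {
		switch tok {
		case "+", "-", "*", "/":
			if len(stack) < 2 {
				return 0, fmt.Errorf("not enough operands for %q", tok)
			}
			b, a := stack[len(stack)-1], stack[len(stack)-2]
			stack = stack[:len(stack)-2]
			var r float64
			switch tok {
			case "+":
				r = a + b
			case "-":
				r = a - b
			case "*":
				r = a * b
			case "/":
				r = a / b
			}
			stack = append(stack, r)
		default:
			n, err := strconv.ParseFloat(tok, 64)
			if err != nil {
				return 0, fmt.Errorf("bad token %q: %w", tok, err)
			}
			stack = append(stack, n)
		}
	}
	if len(stack) != 1 {
		return 0, fmt.Errorf("leftover operands: %v", stack)
	}
	return stack[0], nil
}

func main() {
	fmt.Println(evalRPN("2 2 + 5 *")) // 20 <nil>
}

Reading the expression left to right, operands are pushed and each operator consumes the top two stack entries, so no parentheses or precedence rules are ever needed.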
Application Folder Structure
This brings us to our first convention. Although our example involves just one microservice, our setup is designed to accommodate multiple services per functional area.
❯ ls -R1
Makefile
README.md
go.mod
go.sum
src/
./src:
calculator/
./src/calculator:
main.go
rpn.go
rpn_test.go
Each service is placed in a separate folder under ./src/.
This folder may also include supporting microservices and tools, such as those for synthetic transactions or automated service operations. Sometimes, as tools grow or their use extends beyond the team, it becomes practical to move them to a separate repository.
Embracing Makefiles
We extensively use Makefiles here as well. Coming from a Makefile-heavy background in early C/C++ development on DOS and Unix platforms, I thought I was well-prepared. However, during my initial days, I found myself looking up numerous commands, which was a humbling experience. Initially, I wondered if they should be replaced with simpler tools like Ansible or custom Python scripts. But over time, I grew to appreciate their power and versatility, and now I use them in almost all of my projects.
There are at least two Makefile tricks worth mentioning. Here’s the first one:
❯ make help
help Show this help, automatically generated from comments in the Makefile
get Download packages and their dependencies from import paths
run Runs the service
test Runs unit-tests
fmt Format source file and basic housekeeping on imports
This self-documenting feature is quite handy. For every target, we add a quick comment, and then we can grep the Makefile itself:
.PHONY: help
help: ## Show this help, automatically generated from comments in the Makefile
	@fgrep -h "##" $(MAKEFILE_LIST) | fgrep -v fgrep | sed -e 's/\\$$//' | sed -e 's/:.*##/ /'

.PHONY: get
get: ## Download packages and their dependencies from import paths
	@echo "+ $@"
	@go get ./...

.PHONY: run
run: ## Runs the service
	@echo "+ $@"
	go run ./src/calculator

.PHONY: test
test: ## Runs unit-tests
	go test -count=1 -v ./...
The MAKEFILE_LIST variable includes all files read by GNU Make, not just the Makefile in the current folder.
The second one allows us to generate multiple targets from a template. Let’s say we have 10 services in our folder, and we want a target like ci_calculator_image for each of them. Instead of defining them by hand, we can do:
IMAGES := calculator otherservice yetanother

.PHONY: $(IMAGES:%=ci_%_image)
$(IMAGES:%=ci_%_image): ci_%_image: cicd/docker/Dockerfile.%
	docker buildx build --tag $* -f $< .
and make will generate three targets for us: ci_calculator_image, ci_otherservice_image, and ci_yetanother_image. In the Makefile above, $* expands to the original image name, i.e. calculator, and $< to the dependency file, i.e. cicd/docker/Dockerfile.calculator.
Microservice Framework Selection
There are several excellent frameworks for routing HTTP traffic, such as Mux (gorilla/mux) and Gin (gin-gonic/gin). I chose to use Chi because it is 100% compatible with net/http and is built around the context package introduced in Go 1.7, which I frequently use. It’s a simple service that accepts a POST on / with application/json encoded content. This example, while a bit contrived, demonstrates how to decode JSON in a handler, which might be useful for those looking to build upon this foundation. With that said, I believe we’re ready to start the deployment process.
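As a sketch of what that looks like, here is a minimal Chi service accepting a POST on / with a JSON body. The request and response field names (and the evalRPN helper from the earlier sketch) are my assumptions, not necessarily what the repository uses:

package main

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/go-chi/chi/v5"
	"github.com/go-chi/chi/v5/middleware"
)

// calcRequest and calcResult are illustrative payload shapes; the real
// service may name its fields differently.
type calcRequest struct {
	Expression string `json:"expression"` // e.g. "2 2 + 5 *"
}

type calcResult struct {
	Result float64 `json:"result"`
}

func main() {
	r := chi.NewRouter()
	r.Use(middleware.Logger)

	// Accept a POST on / with application/json content, decode it in the
	// handler, evaluate the expression, and encode the answer back as JSON.
	r.Post("/", func(w http.ResponseWriter, req *http.Request) {
		var in calcRequest
		if err := json.NewDecoder(req.Body).Decode(&in); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		result, err := evalRPN(in.Expression) // evalRPN as sketched earlier
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(calcResult{Result: result})
	})

	log.Fatal(http.ListenAndServe(":8080", r))
}

Since chi.NewRouter returns a plain http.Handler, it plugs straight into net/http’s ListenAndServe; that 100% compatibility is exactly the selling point mentioned above.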
The Environment
Before diving in, let me give you a quick overview of our setup. I’ll be using minikube with Docker as the container runtime inside the VM, QEMU as the VM driver, and a socket_vmnet network, which enables full networking capabilities for minikube under QEMU. I won’t delve into the details here, but you can find ample resources online, or feel free to reach out to me on the fediverse.
❯ brew install socket_vmnet qemu minikube docker docker-buildx
❯ brew services start socket_vmnet
❯ minikube start --driver=qemu --container-runtime=docker --network=socket_vmnet
❯ minikube -p minikube docker-env | source
Building the Docker Image
A while back the docker build command was deprecated, and now we need to use buildx. It works pretty much the same, and we will leverage multi-stage builds to get rid of all the build and distro clutter. We will leave only our application and its runtime dependencies by using a distroless Debian 11 container as the base for our final image.
# syntax=docker/dockerfile:1
FROM golang:1.21 AS build-stage
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY src/calculator/*.go ./
RUN CGO_ENABLED=0 GOOS=linux go build -o /calculator
FROM build-stage AS run-test-stage
RUN go test -v ./...
FROM gcr.io/distroless/base-debian11 AS build-release-stage
WORKDIR /
COPY --from=build-stage /calculator /calculator
EXPOSE 8080
USER nonroot:nonroot
ENTRYPOINT ["/calculator"]
or from within our example application with just:
❯ make ci_calculator_image
Deploying with qbec
With the environment set and the Docker image built, let’s focus on deployment. One practice I favor is creating a cicd/ subfolder for all non-local tasks, including advanced testing and Docker image building. This folder typically contains a Makefile for these tasks, which I include in the main Makefile, as well as folders for Docker images and qbec configurations.
So, what is qbec all about? It’s a tool that simplifies the creation and management of Kubernetes objects across multiple clusters and/or namespaces. Let’s install it and explore its capabilities.
Installation commands:
❯ brew tap splunk/tap
❯ brew install qbec
One of the biggest challenges for my team, given the scale of our operations, is managing more than 20 different Kubernetes clusters. We have three types of environments: dev, stage, and live, each with varying levels of traffic and activity. Internally, we also aim to optimize resource utilization: we don’t want to set resources too low, which would cause frequent scaling, but also not too high, which would waste them.
The question then is: How do we avoid configuration duplication while maintaining a high degree of flexibility? Let’s see how qbec can help us in this endeavor.
20+ Clusters
First of all, we define all our environments in qbec.yaml.
apiVersion: qbec.io/v1alpha1
kind: App
metadata:
  name: meetup-example
spec:
  environments:
    local:
      context: minikube
      defaultNamespace: default
    cluster001:
      defaultNamespace: default
      server: https://192.168.1.1:8443
    cluster002:
      defaultNamespace: default
      server: https://10.0.0.1:8443
Now we can quickly apply changes to each environment with:
❯ qbec apply cluster001
There are many ways to customize settings for each cluster. The one qbec suggests out of the box is to have an environments folder with a number of files. First we have base.libsonnet for the base environment, or _ for short. The other files are named after the keys in spec.environments, so in our case local.libsonnet, cluster001.libsonnet, etc.
❯ ls -1R cicd/qbec/environments/
base.libsonnet
local.libsonnet
Our microservice’s Kubernetes objects are defined in components/, but we’ll get back to this. First, for the base environment, let’s define all the default parameters that we will later use while constructing our Kubernetes resources. Something along the lines of:
{
  components: {
    calculator: {
      name: "calculator",
      serviceName: "calculator",
      replicaCount: 3,
      port: 8080,
      externalServicePort: 8989,
      serviceType: "NodePort",
      enabled: true,
      resources: {
        requests: {
          cpu: "200m",
          memory: "200Mi",
        },
        limits: {
          cpu: "300m",
          memory: "500Mi",
        },
      },
    },
  },
}
Then, per environment, we can use Jsonnet’s object-merging operator +:, which does a nested update of one object with another. Let’s say for the local environment we just want a single replica of our service, because why would we need more?
local base = import './base.libsonnet';

base {
  components +: {
    calculator +: {
      replicaCount: 1,
    },
  },
}
You can now compare the results by looking at the generated specs for both environments:
❯ diff (qbec --root cicd/qbec/ show local | psub) (qbec --root cicd/qbec/ show _ | psub)
12c12
< qbec.io/environment: local
---
> qbec.io/environment: _
15c15
< replicas: 1
---
> replicas: 3
53c53
< qbec.io/environment: local
---
> qbec.io/environment: _
Adjusting Properties Per Tier
There’s many default settings in qbec.yaml
, but we can also add our own properties
too. Here, for each environment we are going to add indication if it is a local, development, staging or production environment:
spec:
  environments:
    local:
      context: minikube
      properties:
        envType: local
    cluster001:
      server: https://192.168.1.1:8443
      properties:
        envType: staging
    cluster002:
      server: https://10.0.0.1:8443
      properties:
        envType: production
With that in place, let’s see our object definition:
local paramMap = import "params.libsonnet";
local params = paramMap.components.calculator;
local imagePullPolicy = std.extVar("pull_policy");
local imageVersion = std.extVar("version");

local name = params.name;
local podLabels = {
  app: name,
};
local serviceLabels = {
  service: params.serviceName,
};

local target = std.extVar("qbec.io/env");
local targetProps = std.extVar("qbec.io/envProperties");

local getResources(props) =
  if std.objectHas(props, "envType") && props.envType == "staging" then
    {
      limits +: {
        cpu: "100m",
        memory: "100Mi",
      },
    }
  else {};

{
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: {
    name: name,
    labels: podLabels,
  },
  spec: {
    replicas: params.replicaCount,
    selector: {
      matchLabels: podLabels,
    },
    strategy: {
      type: "RollingUpdate",
    },
    template: {
      metadata: {
        labels: podLabels,
      },
      spec: {
        containers: [
          {
            name: name,
            image: "calculator:" + imageVersion,
            imagePullPolicy: imagePullPolicy,
            ports: [{
              containerPort: params.port,
              name: "http",
              protocol: "TCP",
            }],
            resources: params.resources + getResources(targetProps),
          },
        ],
      },
    },
  },
}
In the code snippet above, the params object is derived from the settings we statically configured for our environments. During each evaluation, params reflects the values computed for the environment being evaluated. Note that a Jsonnet file always evaluates to a single value; in our example, we return the Deployment resource definition directly.
The getResources function plays a crucial role in the dynamic calculation of properties. It takes the environment properties defined in qbec.yaml and, based on the environment type, produces an object with overrides tailored to that specific environment. A few lines down, we merge the base properties from params.resources with this dynamically generated object, which replaces the original values with the environment-specific ones. For a staging cluster, for example, the merge keeps the base requests (200m CPU, 200Mi memory) but lowers the limits to 100m CPU and 100Mi memory.
Postprocessor
In cloud environments, an essential aspect of management is cost tracking, which often involves adding specific labels or annotations to each application for team-based cost analysis. To streamline this and avoid the repetitive task of manually labeling each object, we can implement a post-processor. Once configured in qbec.yaml, it automatically appends the necessary per-team metadata to every object we create. By doing so, we not only ensure consistent labeling across our application resources but also significantly simplify the management and tracking of cloud costs.
function(object) object {
  metadata +: {
    annotations +: {
      "meetup.com/type": "meetup",
      "meetup.com/technology": "golang",
    },
  },
}
Adjusting Resources Per Tier
A common challenge we encounter involves tailoring the resources we deploy to suit the specific needs of different environments. For instance, in our development environment, there might be no need for HorizontalPodAutoscaler or PodDisruptionBudget. Similarly, we might want to completely disable our service in cluster002
if it’s not required there. To address these variations, we can modify our component file slightly. Instead of directly returning a JSON object, we will define each resource individually and then combine them at the end. This approach allows us to have finer control over what gets deployed in each environment, ensuring that resources are allocated efficiently and appropriately.
local deployment = {
  apiVersion: "apps/v1",
  kind: "Deployment",
  // ...
};

local service = {
  apiVersion: "v1",
  kind: "Service",
  // ...
};

local hpa = {
  // definition for HorizontalPodAutoscaler
};

local pdb = {
  // definition for PodDisruptionBudget
};

local dev_resources = [deployment, service];
local all_resources = [deployment, service, hpa, pdb];

local isDevelopment(props) =
  std.objectHas(props, "envType") && (props.envType == "development" || props.envType == "local");

local resources = if isDevelopment(targetProps) then dev_resources else all_resources;

if params.enabled then resources else []
Final words
There’s certainly much more to explore beyond what we’ve covered here, but this should provide you with a solid foundation to get started. This article stems from a talk I delivered at the 11th Go Cracow meetup. For those interested in diving deeper, all the code discussed can be found on Codeberg. I encourage you to experiment and have fun with it! And if you wanna reach out, feel free to poke me on the fediverse.
Happy coding!