From code to production with Kubernetes via Gitlab

Warning: This is a technical post.

We decided to adopt Kubernetes as the company doubles down on container-based deployment.

I am probably the last code-savvy person in the company to pick up Kubernetes since we migrated our deployments to it. Given that I have some code to deploy, I spent three days learning Kubernetes.

This post will serve as a reference for myself, or anyone who would like to pick up Kubernetes and get deploying on it as soon as possible.

Problem: To push code to Gitlab, and have it automatically deployed via Kubernetes

As a developer, I want to deploy changes made to code. We use Gitlab as

  • our code repository
  • our continuous integration/delivery (CI/CD) pipeline
  • our Docker container registry

Instead of Ansible, I would like to have our Docker containers deployed via Kubernetes.

Why

If you are not sure why Kubernetes can be helpful for your specific use-case, don't use Kubernetes. A simple Ansible playbook will solve most needs and scale up to an intermediate level of complexity.

Kubernetes (server software)

Kubernetes, as orchestration server software, is complex. You need a team to maintain a Kubernetes installation yourself. This is why we opted for managed Kubernetes instances.

I can recommend DigitalOcean or Scaleway for their managed Kubernetes offerings: Scaleway if you need cheap resources and latency matters less; DigitalOcean for a slightly more premium offering, but with local servers in Singapore and, hopefully, your area of business.

Terminologies

  • Cluster - a group of servers managed by Kubernetes
  • Pod - a set of Docker containers that share volumes and a network interface
  • Service - a policy defining access to a set of pods
  • Deployment - A set of Pod replicas
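
Each of these maps to an object you can list with kubectl (the tool installed in the next section). For example:

$ kubectl get nodes          # the servers in the cluster
$ kubectl get pods
$ kubectl get services
$ kubectl get deployments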

Installing kubectl and connecting to a cluster

You use kubectl to interact with an instance of Kubernetes, so you have to install it first. As a Mac user, I use Homebrew to install it like so:

$ brew install kubernetes-cli

Thereafter, you need to interact with a Kubernetes cluster. To achieve that, you need a kubeconfig file, which is usually stored at ~/.kube/config.

I used Scaleway's Kubernetes implementation, known as Kapsule. In Kapsule's case, they provided me with a .yaml file named kubeconfig-steven-private.yaml right after I created a Kapsule cluster. To get started immediately, I can replace the default config by doing the following:

$ mv ~/.kube/config ~/.kube/config.bak   # back up any existing config
$ mv ./kubeconfig-steven-private.yaml ~/.kube/config

To test that this is working, all I need to do is run:

$ kubectl get pods

And it should show an empty list. If you are not authenticated properly or not connected to a Kubernetes cluster, you will get an error instead.
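
For reference, a fresh cluster returns something like this (the exact wording varies across kubectl versions):

$ kubectl get pods
No resources found in default namespace.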

Testing it locally

Instead of a managed Kubernetes cluster, which costs money, you can test Kubernetes locally with minikube. As a Mac user, I can install it like so:

$ brew install minikube

Mac quirks

minikube is weird on Mac. Its default virtual machine driver is really buggy, so this is how I recommend starting minikube on a Mac:

$ minikube start --vm-driver=virtualbox

Testing docker locally

Sometimes, you want to test docker images locally on minikube without pushing them to a remote container registry. To do that:

$ eval $(minikube docker-env)     # point the docker CLI at minikube's daemon
$ eval $(minikube docker-env -u)  # to stop, pointing it back at your own daemon
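
Here is a minimal sketch of that workflow, assuming a Dockerfile in the current directory and the node-hello-world image name used later in this post:

$ eval $(minikube docker-env)
$ docker build -t node-hello-world .   # the image now lives in minikube's docker daemon
$ eval $(minikube docker-env -u)

Because the Deployment manifest below uses imagePullPolicy: IfNotPresent, Kubernetes will use this local image instead of pulling from a registry.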

Writing Kubernetes manifests

You need two different types of manifests to successfully deploy an application that is accessible.

The first type of manifest is called a Deployment. This is how it looks:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-hello-world
  labels:
    app: node-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-hello-world
  template:
    metadata:
      labels:
        app: node-hello-world
    spec:
      containers:
      - name: node-hello-world
        image: node-hello-world
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: gitlab-registry

This Deployment manifest

  • provides metadata about the application
  • tells Kubernetes how many identical replicas of the pod we want
  • tells Kubernetes what the Docker container image is
  • tells Kubernetes what port the container listens on
  • tells Kubernetes how to authenticate with the private registry
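
To try the Deployment on its own (assuming it is saved as k8s/deployment.yml, which is where mine lives), apply it and watch the pod come up:

$ kubectl apply -f k8s/deployment.yml
$ kubectl get pods    # a node-hello-world pod should appear shortly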

The second type of manifest is called a Service:

apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-svc
  labels:
    app: node-hello-world-svc
spec:
  selector:
    app: node-hello-world
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

This Service manifest

  • describes itself with a name and some labels
  • tells Kubernetes which pods it routes traffic to (via selector, which matches the Deployment's pod labels)
  • tells Kubernetes that it accepts TCP traffic on port 80 and routes it to port 8080 on the pods
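
Likewise, assuming the Service manifest is saved in the same k8s folder (the service.yml filename here is my own choice), applying it gives the pods a stable in-cluster address:

$ kubectl apply -f k8s/service.yml
$ kubectl get svc node-hello-world-svc   # a ClusterIP service listening on port 80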

Making Kubernetes and Gitlab play nice

To deploy a code base, first I have to compile the code into a Docker image. This is how it looks in my .gitlab-ci.yml:

variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

build:
    stage: build
    image: docker:stable
    services:
        - docker:18-dind
    variables:
        DOCKER_TLS_CERTDIR: ""
    before_script:
        - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    script:
        - docker build -t $IMAGE_TAG .
        - docker push $IMAGE_TAG
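
For context, $CI_REGISTRY_IMAGE and $CI_COMMIT_REF_SLUG are predefined Gitlab CI variables. For a hypothetical project at gitlab.com/nubela/node-hello-world built on the master branch, $IMAGE_TAG would expand to something like:

registry.gitlab.com/nubela/node-hello-world:master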

The next step is to deploy it to the Kubernetes cluster that I have created on Scaleway. To do that:

deploy_on_scaleway:
    stage: deploy
    image: ${CI_REGISTRY}/nubela/k8s-ci-tools:latest
    before_script:
        - export KUBECONFIG=$KUBE_CONFIG
    script:
        - cd k8s
        - kustomize edit set image node-hello-world=$IMAGE_TAG
        - kustomize build . > build.yml
        - kubectl replace --force -f ./build.yml

This Gitlab CI pipeline uses a docker image known as k8s-ci-tools:latest. It is a custom image that we built for internal use, but you can replicate it by creating an Alpine docker image with kubectl and kustomize installed, as sketched below.
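
A minimal sketch of such an image might look like this Dockerfile (the versions and download URLs here are assumptions; pin whatever matches your cluster):

FROM alpine:3.12

RUN apk add --no-cache curl ca-certificates

# kubectl: a static binary from the official release bucket
RUN curl -Lo /usr/local/bin/kubectl \
      https://storage.googleapis.com/kubernetes-release/release/v1.18.8/bin/linux/amd64/kubectl \
 && chmod +x /usr/local/bin/kubectl

# kustomize: a static binary from its Github releases
RUN curl -L https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv3.8.1/kustomize_v3.8.1_linux_amd64.tar.gz \
      | tar xz -C /usr/local/bin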

Dynamic manifest declarations via kustomize

Every code change results in a new docker image with a fresh hash, so I have to update the deployment.yml file dynamically on every code push.
kustomize is an abstraction layer over Kubernetes YAML manifests that lets me update manifests dynamically. (What do you call an abstraction on top of an abstraction of an abstraction?)

To do that, I have a k8s folder in my project, with three files:

  • kustomization.yml
  • deployment.yml
  • nginx_route.yml

kustomization.yml contains the following:

resources:
- deployment.yml
- nginx_route.yml

To change the image name from node-hello-world to the value of a Gitlab CI variable called $IMAGE_TAG, I run the command:

$ kustomize edit set image node-hello-world=$IMAGE_TAG
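
Under the hood, this appends an images transformer to kustomization.yml. Assuming $IMAGE_TAG expands to the hypothetical registry.gitlab.com/nubela/node-hello-world:master, kustomization.yml ends up looking like this:

resources:
- deployment.yml
- nginx_route.yml
images:
- name: node-hello-world
  newName: registry.gitlab.com/nubela/node-hello-world
  newTag: master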

Then I can compile the changes as such:

$ kustomize build .

Provisioning the kubeconfig file

kubectl needs a config file, and by default it looks for it at ~/.kube/config. You can point it at a different file by setting the $KUBECONFIG environment variable.

To provide that to the build context in Gitlab CI, I have a Gitlab CI variable of type "File", with the contents of kubeconfig-steven-private.yaml as its value.

This is why the deploy job sets this line in the Gitlab pipeline, handing the CI variable file to kubectl: export KUBECONFIG=$KUBE_CONFIG

Getting a remote server to access Gitlab's private registry

There are two types of credentials. The first is a temporary Gitlab credential that is only valid while the pipeline executes. This is not good enough, because Kubernetes' orchestration happens in the background: even if the kubectl commands return successfully, it does not mean any orchestration work has begun, and the cluster may pull the image long after the pipeline has finished. This means the temporary credentials will not do.

The other type is a Deploy Token. You can generate one in the project settings in Gitlab.

Once you have created a deploy token, you can store it in the cluster as a registry credential like this:

$ kubectl create secret docker-registry gitlab-registry --docker-server=<PATH_TO_REGISTRY> --docker-username=<DEPLOY_TOKEN_USERNAME> --docker-password=<DEPLOY_TOKEN_PASSWD> --docker-email=<YOUR_GITLAB_EMAIL>

In this case, I have named the credential gitlab-registry.
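
You can confirm the secret landed in the cluster:

$ kubectl get secret gitlab-registry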

And this credential needs to be referenced under the pod spec (spec.template.spec) in the Deployment manifest, as such:

      imagePullSecrets:
      - name: gitlab-registry

Using nginx to route http traffic

To use nginx to route HTTP traffic amongst pods in my cluster, I first have to install nginx as an Ingress Controller. An Ingress is a Kubernetes resource that declares routing rules, and an Ingress Controller is the software that fulfils those rules; in my case, nginx.

To do so, I need to install nginx in the Kubernetes cluster. I will do this with helm, a "package manager" for Kubernetes. First, I need to install helm locally:

$ brew install helm

Then I have to install tiller in the Kubernetes cluster; helm needs tiller there to do its work:

$ kubectl -n kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
$ helm install stable/nginx-ingress --name hello-nginx # install nginx
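
Once the chart settles, you should see the controller pod and its LoadBalancer service; the exact resource names depend on the chart's defaults, so I just grep for them:

$ kubectl get pods | grep nginx-ingress
$ kubectl get svc | grep nginx-ingress   # note the EXTERNAL-IP column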

Finally, I declare my route with an Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: node-hello-world-ingress
spec:
  rules:
  - host: k8s-nginx-test.nubela.co
    http:
      paths:
      - path: /
        backend:
          serviceName: node-hello-world-svc
          servicePort: 80
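
Assuming this Ingress is the nginx_route.yml from the k8s folder earlier, kustomize will apply it together with the rest of the manifests. To test it by hand, point the DNS record for the host at the nginx controller's external IP, then:

$ kubectl apply -f k8s/nginx_route.yml
$ kubectl get ingress node-hello-world-ingress
$ curl http://k8s-nginx-test.nubela.co/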

Kubernetes is hard, comprising a serious amount of abstraction on top of abstractions. I would say avoid Kubernetes unless you know exactly what you are doing.

Steven Goh | CEO