Continuous Integration — Developer Getting Started Guide — Zero to Pipeline

Ravi Lachhman
10 min read · Jul 23, 2022

Continuous Integration is the practice of automating builds, triggered by some sort of event such as a code check-in, a merge, or a regular schedule. The end goal of a build is to be deployed somewhere, and the main goal of Continuous Integration is to build and publish that deployable unit.

However, more than the compiled source code goes into a build. The end product of Continuous Integration is a release candidate: the final form of an artifact to be deployed. Quality steps can be taken to produce that artifact, such as finding bugs and verifying their fixes. Packaging, distribution, and configuration all go into a release candidate.

According to Paul Duvall, co-author of Continuous Integration, in a nutshell, CI will improve quality and reduce risk. Having a Continuous Integration approach frees teams from the burden of manual builds and also makes builds more repeatable, consistent, and available. If you are unfamiliar with CI, this guide will get you started on your first automated build.

Check out the video:

https://www.youtube.com/watch?v=pcD6cmUuejg

Your Local Build — Onramp to Continuous Integration

To create a build, you need something that can be built, which means source code. The steps you take to build and package your application or service need to be represented in a CI tool or platform for automation. A CI platform needs to connect to your source code management (SCM) system to start the build process. This can be as simple as connecting a public GitHub repository that has something to be built.

How to Build an App Locally?

Languages and package formats have build-specific tools. As an example, here is a simple NodeJS application that can be built into a Docker Image; the Dockerfile has the specifics on how to build and package the app.

Sample App Repo:

https://github.com/ravilach/easy-node-docker
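If you peek inside the repository, the Dockerfile follows the typical NodeJS pattern: install dependencies, copy the source, and define how the app starts. The sketch below is illustrative rather than a copy of the repo's actual file; the base image tag, exposed port, and entrypoint file name are assumptions.

# Illustrative Dockerfile for a small NodeJS app; the repo's actual file may differ.
# The base image tag, exposed port, and entrypoint file name are assumptions.
FROM node:16-alpine
WORKDIR /usr/src/app
# Copy the manifests first so the dependency layer is cached between builds
COPY package*.json ./
RUN npm install
# Then copy the rest of the application source
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]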

Building and packaging this sample application locally requires a few pieces: NPM and Docker.

If you don’t have those runtimes, you can install them with Chocolatey on a Windows machine or with Homebrew on a Mac.

NPM:

choco install nodejs
brew install node

Docker:

choco install docker
brew install docker
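Once installed, a quick sanity check confirms the runtimes are on your path, and from the root of the project you can pull down the app's dependencies:

#Verify the runtimes are available
node --version
npm --version
docker --version

#From the root of the project, install the app's dependencies
npm install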

Once you build your application, you will need to store the resulting binary somewhere; in this case, that is the Docker Image.

Creating and Storing Your Image

Like any file you want to share with the world, storing your images in an external spot makes them more accessible. A big benefit of using Docker as a packaging format is the ecosystem of Docker Registries out there. Your firm might have a registry provider. A good free option is Docker Hub. If you do not have a registry available to you, you can create a Docker Hub account and create a repository, e.g. “samplejs”.
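If your local Docker client has not logged in to Docker Hub yet, authenticate first; you will be prompted for your Docker Hub credentials.

docker login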

With those pieces, you can build and push your image.

docker build --tag your_user/repo:tag .
docker push your_user/repo:tag

E.g., in my case, at the root of the project:

docker build --tag rlachhman/samplejs:1.0.4 .
docker push rlachhman/samplejs:1.0.4

You can validate that the image has been placed into the Docker Registry.
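A simple way to validate the push is to remove the local copy of the image and pull it back down from the registry (substitute your own user, repo, and tag):

#Remove the local image, then pull it back from the registry
docker rmi your_user/repo:tag
docker pull your_user/repo:tag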

That is simple enough to get your build and packaging working locally. The next step is to externalize this, which is exactly what creating a Continuous Integration Pipeline is all about.

Your First Continuous Integration Pipeline

If you took a closer look at what your machine was doing during those local builds, it was bogged down for a few moments. For yourself, that is fine, but imagine having to support tens, hundreds, or even thousands of engineers; this process can be taxing on systems. Luckily, modern Continuous Integration platforms are designed to scale with distributed nodes. Harness Continuous Integration is designed to scale and to simplify externalizing your local steps; this is the Continuous Integration Pipeline. Let’s enable Harness Continuous Integration to mimic your local steps and create your first CI Pipeline. Once you are done, you will have a repeatable, consistent, and distributed build process. There are a few Harness objects to create along the way, which this guide will walk through step-by-step.

Starting off with Harness

Harness is a platform, but we will focus on the Continuous Integration module. First, sign up for a Harness account to get started.

Your onramp and workhorse in the Harness Platform is the Harness Delegate, which can run in several places. For this example, using the Harness Kubernetes Delegate is the easiest.

Wiring The Harness Kubernetes Delegate

Harness works on multiple Kubernetes providers. An easy option is running a local Kubernetes cluster such as minikube or k3d, or if you have another external Kubernetes environment, feel free to use that as well. The Harness Delegate is a job runner that acts on your behalf. The Delegate can be used to spin up and down needed resources and directly interact with the host Kubernetes cluster.

#Install Minikube
choco install minikube
brew install minikube

#Start Minikube and Validate
minikube config set memory 8128
minikube start
kubectl get pods -A

Execute the set and start commands, and you will have a local Kubernetes cluster running on your machine to which you can wire Harness resources.
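If anything looks off, a couple of standard checks will confirm the cluster is healthy before you wire in Harness:

#Confirm the cluster and node are up
minikube status
kubectl get nodes
kubectl cluster-info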

Once the cluster is running, it is time to wire in a new Harness Delegate. A cloud-hosted Harness Delegate is provided for you by default as part of the free tier infrastructure. In this exercise, we will add another Delegate to your local Kubernetes cluster, mimicking the experience of leveraging Harness in your own environment.

Harness -> Home -> Projects -> Default Project -> Project Setup + New Delegate.

Select Kubernetes

Click Next.

Delegate Name: minikubelocal

Delegate Size: Laptop

Delegate Tokens: default_token

Delegate Permissions: Cluster-wide read/write

Click Continue.

Download and apply the Harness Delegate YAML.

kubectl apply -f .\harness-delegate.yml

Click Continue then Done, and your new Harness Delegate should be running.
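You can also confirm from kubectl that the Delegate workload came up on the cluster; look for a pod whose name matches the Delegate name you chose. On Windows PowerShell, swap grep for Select-String.

#Look for the Delegate pod, e.g. minikubelocal-*
kubectl get pods -A | grep -i delegate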

Access To Your Source Code

Assuming you are leveraging GitHub, Harness will need access to the repository. For this example, providing a Personal Access Token that is scoped to “repo” will allow for connectivity.

If you have not created a Personal Access Token before:

GitHub -> Settings -> Developer Settings -> Personal Access Tokens

Name: harnessci

Scopes: repo

Expiration: 30 days

Make sure to jot down the token, as it will only be displayed once.
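If you want to sanity-check the token before wiring it into Harness, a quick call to the GitHub API confirms it authenticates; replace the placeholder with your token.

#Should return your GitHub profile as JSON if the token is valid
curl -H "Authorization: token YOUR_PERSONAL_ACCESS_TOKEN" https://api.github.com/user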

Now you are ready to wire in the pieces to Harness Continuous Integration.

Create Your First Pipeline

In the Build Module [Harness Continuous Integration], walking through the wizard is the fastest path to get your build running. Click Get Started. This will create a basic Pipeline for you.

Once you click Get Started, select GitHub as your repository provider, then enter the GitHub Access Token you created or are leveraging for this example.

Click Continue. Then click Select Repository to select the Repository that you want to build.

Select the repository then click Create Pipeline. The step to focus on will be Build.

Click Continue to define what infrastructure to run the build on. With the quick start, you can leverage Harness-provided infrastructure or your own Kubernetes infrastructure, e.g. minikube. Below, we will be leveraging your own infrastructure.

First change the infrastructure to “Kubernetes”.

Then select the drop-down “Select Kubernetes Cluster”. Then + New Connector.

In the wizard, name the Kubernetes connection “myfirstcinode”.

Click Continue. With Harness, you can use the same cluster the Harness Delegate is running on by selecting “Use Credentials of a specific Harness Delegate”. The Harness Delegate will facilitate all needed work on the Kubernetes cluster.

Click Continue. Now select the Harness Delegate that corresponds to your Kubernetes cluster. In this case, “minikubelocal”.

Click “Save and Continue” and the connection will be validated.

Back in the Pipeline Builder, “myfirstcinode” will be listed.

Provide a Namespace and OS to run.

Namespace: default

OS: Linux [if using Windows WSL, Linux is the correct setting].

After the Build Infrastructure is set, it is now time to set up the Push step to push the artifact to a Docker Registry. In the Pipeline View, click + Add Stage and create a Stage called “Push”. Then click on “Set Up Stage”.

In the setup of the Stage, you can leverage the infrastructure that the previous build stage ran on by selecting “Propagate from an existing stage”.

Click Continue. Now you can add a Step to represent the Docker Push. Click “Add Step”.

Select “Build and Push an image to Docker Registry”.

You can create a new Push step.

Name: pushtodocker

Next, set up the Docker Connector by clicking on the dropdown for Docker Connector.

Then create a new connector.

You can name the new Docker Registry connector “dockerhub”.

Click Continue, and you can enter your credentials for Docker Hub.

Provider Type: Docker Hub

Docker Registry URL: https://registry.hub.docker.com/v2/

Authentication: your_user

Password: your_password [Will be saved as a Harness Secret]

Click Continue and select the Harness Delegate to execute on. This will be your Kubernetes infrastructure.

Click Save and Continue, and the connection will validate.

Then click Finish. Lastly, enter your Docker Repository and Tag information.

Docker Repository: your_account/your_registry

Tags: cibuilt

Then click “Apply Changes” and Save the Changes.

With those changes saved, you are ready to execute your first CI Pipeline.

Running Your First CI Pipeline

Executing is simple. Head back to your pipeline and click on “Run”. Unlike your local machine, where you had to wire in NPM and Docker dependencies, Harness CI will resolve these by convention.

Then you can select a branch to run off of and execute the Pipeline.

Branch Name: main [if using the example repo]

Now you are ready to execute. Click “Run Pipeline”. You are off to the races.

Once the run succeeds, head back to Docker Hub, and the cibuilt tag is there!
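You can also confirm the freshly built image from your local machine by pulling the tag the Pipeline just pushed (substitute your own account and repository):

docker pull your_account/your_registry:cibuilt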

This is just the start of your Continuous Integration journey. It might seem like multiple steps to get your local build into the platform, but it unlocks a world of possibilities.

Continuing on Your Continuous Integration Journey

You can now execute your builds whenever you want in a consistent fashion. You can modify the trigger to watch for SCM events so that, upon a commit for example, the Pipeline gets kicked off automatically. All of the objects you create are available for you to re-use. One part we did not touch upon in this example is executing your test suites in a similar format. Lastly, you can even save your backing work and have it as part of your source code. Everything that you do in Harness is represented by YAML; feel free to store it as part of your project.
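As a rough illustration only, the YAML behind a Pipeline like the one built above takes a shape along these lines (collapsed to a single stage for brevity). Field names and structure vary by Harness version, and the identifiers below simply mirror the names used in this guide, so treat the YAML view in Pipeline Studio as the source of truth rather than this sketch.

# Illustrative sketch of a Harness CI Pipeline in YAML, not generated output
pipeline:
  name: My First CI Pipeline
  identifier: my_first_ci_pipeline
  stages:
    - stage:
        name: Build
        type: CI
        spec:
          cloneCodebase: true
          infrastructure:
            type: KubernetesDirect
            spec:
              connectorRef: myfirstcinode    # Kubernetes connector created in this guide
              namespace: default
          execution:
            steps:
              - step:
                  name: Push
                  type: BuildAndPushDockerRegistry
                  spec:
                    connectorRef: dockerhub   # Docker Hub connector created in this guide
                    repo: your_account/your_registry
                    tags:
                      - cibuilt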

Make sure to stay tuned and try more of the examples.

Cheers!

-Ravi
