Reflecting on KubeCon North America 2020, one project that garnered a good number of talks was Falco. If you are unfamiliar with Falco, it is an incubating project inside the CNCF that focuses on runtime security. Leveraging the blossoming support for eBPF, Falco lets you describe security rules and take action on violations.

From an event recording standpoint, it is pretty hard to beat the kernel-level monitoring that Falco provides. Because it can introspect deeply into Kubernetes and the sub-systems that support it, Falco can be a great source of truth. Harness can facilitate both the installation of Falco and the workloads that Falco will monitor. In this example we walk through installing Falco and deploying a sample workload.

Getting Started — Harness Prep

If this is your first time on the Harness Platform, getting up and running is a breeze. First, you will need to install a Harness Delegate into your Kubernetes cluster. If you don’t have a Kubernetes cluster, my favorite way to create one is eksctl, which will spin up an Amazon EKS cluster for you.

#Create EKS Cluster
eksctl create cluster \
  --name fabulous-falco \
  --version 1.18 \
  --region us-east-2 \
  --nodegroup-name standard-workers \
  --node-type t3.xlarge \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --node-ami auto

The Harness Delegate is a worker process that runs in your cluster and performs deployment actions on your behalf, and setup only takes a few clicks.

Head to Setup -> Harness Delegates -> Install Delegate. Select Kubernetes YAML and give it the name “falcofacilitator”.

Click Download and expand the tar.gz that gets downloaded.

Inside the expanded tar.gz, the README.txt file has all the commands you need for the install. The most important one is “kubectl apply -f harness-delegate.yaml”, which you can run against your cluster.
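
If you prefer the terminal for this step, the whole sequence is only a few commands. The archive and namespace names below are the typical defaults from the download and may differ slightly in your environment.

#Expand the download and apply the Delegate manifest
#(archive/folder names are the typical defaults and may vary)
tar -xzvf harness-delegate-kubernetes.tar.gz
cd harness-delegate-kubernetes
kubectl apply -f harness-delegate.yaml

#Optional: watch the Delegate Pod come up; the manifest typically
#creates a harness-delegate namespace
kubectl get pods -n harness-delegate -w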

After a few moments, the Kubernetes Harness Delegate will appear in the UI.

With the Delegate installed, wiring up a Kubernetes cluster to Harness is also pretty snappy. The quickest way is to add the Kubernetes cluster as a Harness Cloud Provider.

To add a Cloud Provider, head to Setup -> Cloud Providers + Add Cloud Provider. Select Kubernetes Cluster as the type. In the wizard, you can name the cluster “Falco Cluster”, and since this is the Kubernetes cluster where we installed the Delegate, the Cloud Provider can inherit its details from the Delegate.

Hit Next and you will be all set to start leveraging your Kubernetes cluster.

Falco Prep

There are several installation methods depending on how deeply you want to inject Falco. For simplicity in this example, you can install Falco as a DaemonSet, which ensures that a Falco Pod runs on every node in your Kubernetes cluster.

The good news is that the DaemonSet installation of Falco is available as a Helm Chart. Harness has native Helm capabilities and can reference a Git repository to retrieve the needed Helm Chart.
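
For context, the hands-on equivalent of that chart install looks roughly like the commands below; in this walkthrough Harness performs the equivalent steps for you, so you do not need Helm installed locally.

#For reference only: installing the Falco chart by hand with Helm 3
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco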

To wire the Git Repository to Harness, head to Setup -> Connectors -> Source Code Repo Providers + Add Source Repo Provider.

Display Name: Falco

URL: https://github.com/falcosecurity/charts

Branch: master

GitHub does require authentication to pull; the easiest way is to enter your GitHub credentials. If this is your first time, Harness will store your password as an encrypted secret.

Once you hit Submit, you will be wired up to the repository.
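
If you want to sanity check the connection details before or after hitting Submit, the same repository and branch can be reached straight from your terminal.

#Optional: confirm the repository and the master branch are reachable
git ls-remote https://github.com/falcosecurity/charts.git master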

Harness and Falco — Installing Falco

With the wiring out of the way, it is time to put Harness to work. We are going to install Falco via the Helm Chart, then deploy a simple application to see what Falco captures. We will fill out the pieces of the Harness CD Abstraction Model to achieve the installation and deployment.

To get started, you will need to create a Harness Application, which is the lifeblood of any deployment.

Setup -> Applications + Add Application. You can name the Application “Falco is Watching”.

Once you hit submit, we can start wiring together the other pieces. Next on the list is a Harness Environment, basically where you want to deploy. In this case, we will use the Kubernetes cluster we are already leveraging.

To create a new Harness Environment, go to Setup -> Falco is Watching -> Environments + Add Environment.

Once you hit Submit, you can define the Infrastructure Definition for the Kubernetes cluster. In the middle of the UI, click + Add Infrastructure Definition.

Name: Falco Cluster

Cloud Provider Type: Kubernetes Cluster

Deployment Type: Kubernetes

Cloud Provider: Falco Cluster

Once you hit Submit, your Infrastructure Definition will be complete.

Next, you will need to wire in a pair of Harness Services, i.e. the deployables. One will be for the Falco Helm Chart and one will be for an Nginx container.

To add a Service, head to Setup -> Falco is Watching -> Services + Add Service. Let’s start with Falco first.

Once you hit Submit, you can link to the remote Helm Chart which is in the GitHub repository. Midway down under “Manifests”, select the ellipsis and click “Link Remote Manifests”.

Fill in the details of the Helm Chart’s location in GitHub.

Manifest Format: Helm Chart from Source Repository

Source Repository: Falco

Branch: master

File/Folder path: falco [https://github.com/falcosecurity/charts/tree/master/falco]

Helm Version: v3

Once you hit Submit, you will be wired up to the remote Manifest.
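
If you want to confirm the File/Folder path before or after linking it, a quick clone shows the chart contents Harness will fetch.

#Optional: confirm the chart lives at the falco folder on master
git clone https://github.com/falcosecurity/charts.git
ls charts/falco
#Expect Chart.yaml, values.yaml, and a templates/ directory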

Next, create a new Harness Service for Nginx. Setup -> Falco is Watching -> Services + Add Service.

Since Nginx is available via public Docker Hub, we can add that location as an Artifact Source. Click on + Add Artifact Source then Docker Registry.

Fill out the details for Nginx. Harness ships with a connection to public Docker Hub [e.g. “Harness Docker Hub” as the Source Server].

Docker Image Name: library/nginx

Harness provides all of the scaffolding you need for a basic Kubernetes deployment. Hit submit and you are all set for Nginx.
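
If you want to double-check the artifact outside of Harness, a quick pull against public Docker Hub confirms that the image name and a tag such as “latest” resolve as expected.

#Optional: confirm the image name and tag resolve on public Docker Hub
docker pull library/nginx:latest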

Next, let’s define a pair of Harness Workflows to deploy our Services. Setup -> Falco is Watching -> Workflows + Add Workflow. Fill in the pieces that were created earlier in the example.

Name: Install Falco

Workflow Type: Rolling Deployment

Environment: Falco K8s Cluster

Service: Falco Install

Infrastructure Definition: Falco Cluster

Once you hit Submit, you can do the same for Nginx. Setup -> Falco is Watching -> Workflows + Add Workflow.

Name: Deploy Nginx

Workflow Type: Rolling Deployment

Environment: Falco K8s Cluster

Service: Nginx

Infrastructure Definition: Falco Cluster

Once you hit Submit, you will have a pair of Workflows.

The last item to create is a Harness Pipeline so that we can execute both of these Workflows in order. Setup -> Falco is Watching -> Pipelines + Add Pipeline. You can name the Pipeline “Falco Example Pipeline”.

Create a two-stage Pipeline, one Stage per Workflow. To add a Stage, click on the +.

The first Pipeline Stage will install Falco. Select the Workflow that was previously created.

Next, add a second Harness Stage to Deploy Nginx.

With both Stages complete, your Pipeline is ready: Install Falco followed by Deploy Nginx.

With that last twisty out of the way, you can review all of the CD Abstraction pieces that were created.

Now it is time to execute your Pipeline and explore what Falco is capturing.

Execute Your Falco Pipeline

After the setup, it is time to run your Harness Pipeline. There are a few ways to do this; an easy way is to head over to Continuous Deployment in the left-hand navigation, select Deployments in the sub-menu, and then click Start New Deployment at the top right.

The Start New Deployment prompt will ask you for the required pieces of data. The only required piece is which version/tag of Nginx you want to deploy; for this example you can select “latest”.

Click Submit and watch the Harness orchestration magic occur!

Exploring Your First Falco

Harness is able to lay down Falco’s Helm Chart and Nginx with ease. You do not need to install Helm locally to leverage Harness’s native Helm capabilities, and you can leverage Harness’s scaffolding to quickly deploy Nginx into your Kubernetes cluster. After hitting Submit, you can watch the two-stage Pipeline run through a Helm deployment and a Kubernetes deployment in the respective Stages.

With Falco and Nginx running, you can take a look at some of the log information Falco generates. There are two excellent examples from Amazon Web Services and Better Programming which demonstrate accessing a sensitive file inside the Nginx container and having the default Falco rules trigger and log that event.

First, you can confirm that Falco is running on both Kubernetes worker nodes with "kubectl get pods -o wide"; the "*-falco-*" Pods appear on each of my example worker nodes.

The "ip-192-168-54-210.us-east-2.compute.internal" node has both Falco and Nginx running, so that is the node whose Falco logs we will look at.
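
A quick filter makes the co-location easy to spot; note that the Nginx Pod carries the name of Harness's default scaffold rather than "nginx".

#Filter for the Falco DaemonSet Pods and the Nginx Pod
#(the default Harness scaffold names the Deployment harness-example-deployment)
kubectl get pods -o wide | grep -E "falco|harness-example"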

To check out the Falco logs that have been generated so far, run "kubectl logs <falco_pod_name>", which in my case is "kubectl logs release-53c46443-3cd8-313c-b0ed-10bc54c72705-falco-wp7tf".
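
If you would rather not copy the generated Pod name by hand, a small convenience snippet like the one below (not part of the Harness setup) does the lookup for you.

#Grab a Falco Pod name without copy/pasting the generated release name
#(add --field-selector spec.nodeName=<node_name> to target a specific node)
FALCO_POD=$(kubectl get pods -o name | grep falco | head -n 1)
kubectl logs "$FALCO_POD"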

The AWS and Better Programming examples recommend accessing a sensitive location in the Nginx container, for example /etc/shadow.

To do that, open a shell into the running Nginx Pod with "kubectl exec -it <nginx_pod_name> -- bash", or in my case "kubectl exec -it harness-example-deployment-786b69bd8f-xbflj -- bash".

Then cat /etc/shadow.

By re-running the Falco log command for that node's Falco Pod [kubectl logs <falco_pod_name>], you will see a message generated by Falco.
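
Putting the trigger and the check together, a minimal sequence looks like the following; the Pod names are the ones from this example, so substitute your own, and the grep simply keys off the file path that appears in the alert output.

#Trigger the default sensitive-file rule, then look for the alert
#(Pod names below are from this example; substitute your own)
kubectl exec harness-example-deployment-786b69bd8f-xbflj -- cat /etc/shadow
kubectl logs release-53c46443-3cd8-313c-b0ed-10bc54c72705-falco-wp7tf | grep shadow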

Just like that, you are up and running with Falco. There is a lot of room for the art of the possible with third-party integrations and log forwarders. When investigating new cloud-native technologies, Harness has you covered.

Harness, Your Partner in New Technologies

The cliché that the only constant in technology is change marches on. With new approaches and ways of thinking coming of age, getting these new paradigms into your organization can be challenging. With Harness, the initial steps of operationalizing new technology are easy. If you have not already, feel free to sign up for a Harness Trial and start your journey today!

Cheers,

-Ravi
