
Intro

This post is a write-up of the steps needed to set up a simple Kubernetes cluster running on (vSphere) VMs. I'm in the process of learning Kubernetes and studying for the Certified Kubernetes Administrator (CKA) exam, where hands-on experience is key.



A lot of material focuses on getting started with Kubernetes through a cloud provider or with tools like Minikube, but as I have access to a home lab I wanted to practice using my own environment. With that said, the cloud provider route is a great one if you want to get up and running quickly, or haven't got access to a lab environment. The Kubernetes Katacoda playground can also be a good place to start.


A big warning right off the bat: this post should not be used for anything production, and it is not a comprehensive write-up of Kubernetes.

I would also argue that if you're using your own environment for bringing up Kubernetes clusters you should probably look at doing it with something like Ansible that can automate the setup.


I'll not go into a lot of detail about the different Kubernetes components in this post. Use the official Kubernetes documentation for that.

As I'm preparing for the CKA exam I'll also put in references to the documentation as we work our way through this post. The Kubernetes documentation is one of the resources allowed in this exam.

Prereqs

A Kubernetes cluster consists of master and worker nodes. Best practice is to have three or more master nodes for high availability and a set of worker nodes sized for the cluster's needs. I will go with three masters and three workers in my cluster.

The initial part of setting up the cluster will use one master and two workers, whereas the remaining nodes will be brought up later on to mimic a scale-out of the cluster (covered in an upcoming post).

I will also install HAProxy on an Ubuntu VM to act as a load balancer when we are scaling out the cluster.

If you want to scale down you can skip the extra masters and workers and go with one master and one worker. Normally the master nodes won't run anything other than administrative cluster loads, but we'll see how we can get around this later on so the master also can act as a worker.

You can use multiple Linux distros for the nodes (and even Windows for the worker nodes). I'll use Ubuntu 18.04 in this setup.

The Kubernetes nodes will have 2 vCPUs and 8 GB RAM, and the HAProxy node will have 2 vCPUs and 4 GB RAM. You can probably scale this down if you're not planning on running anything special in the cluster.

As the network add-on for the Kubernetes cluster I will use Calico.

My vSphere environment consists of three vSphere 7 hosts in a vSAN environment, but the underlying environment won't make much of a difference in this write-up; you could probably do this on bare metal or any other environment as well, like Raspberry Pis for instance.

We'll not do anything fancy with storage or networking in this post, to keep things simple.

Build VMs

So with the prerequisites out of the way let's bring up some VMs.

In total I have 7 Ubuntu 18.04 VMs, all running on the same subnet and with static IP addresses.

VM name      | Role              | IP address      | CPU | RAM  | Disk
kube-a-01    | Kubernetes master | 192.168.100.151 | 2   | 8 GB | 20 GB
kube-a-02    | Kubernetes master | 192.168.100.152 | 2   | 8 GB | 20 GB
kube-a-03    | Kubernetes master | 192.168.100.153 | 2   | 8 GB | 20 GB
kube-a-04    | Kubernetes worker | 192.168.100.154 | 2   | 8 GB | 20 GB
kube-a-05    | Kubernetes worker | 192.168.100.155 | 2   | 8 GB | 20 GB
kube-a-06    | Kubernetes worker | 192.168.100.156 | 2   | 8 GB | 20 GB
haproxy-a-01 | Load balancer     | 192.168.100.150 | 2   | 4 GB | 20 GB

VM preparation

There are a few steps needed on every VM to prepare for the installation of the Kubernetes cluster.

I deliberately go through this on each VM as part of my preparations for the CKA exam. Normally you would build an image with this taken care of, or better yet use something like Ansible to automate the process.

Install packages

We need a few packages installed on each VM.

On all VMs I'll run
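The exact command isn't preserved here, but on Ubuntu 18.04 a baseline along these lines is what I'd expect (treat the package list as an assumption):

    sudo apt-get update
    sudo apt-get install -y apt-transport-https curl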

On the Kubernetes VMs I'll also run
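The original command is missing as well, but this would typically be the container runtime, for example Docker (an assumption; any supported runtime will do):

    sudo apt-get install -y docker.io
    sudo systemctl enable --now docker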

On the Haproxy VM I'll run
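Presumably just the haproxy package itself:

    sudo apt-get install -y haproxy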

Now to install the Kubernetes binaries we need to first add some references to where to find them. This is also documented in the official Kubernetes documentation

On the Kubernetes nodes I'll run
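A sketch of what this looks like, following the official documentation from the 1.18 era (the repository URL and key location are the ones documented back then):

    # Add the Kubernetes apt repository and its signing key
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update

    # Install the 1.18.1 binaries and hold them back from upgrades
    sudo apt-get install -y kubelet=1.18.1-00 kubeadm=1.18.1-00 kubectl=1.18.1-00
    sudo apt-mark hold kubelet kubeadm kubectl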

Note that I'm specifying the 1.18.1 version of the binaries. This is because I want to practice updating the cluster later on. If you're just looking for the latest and greatest skip the =1.18.1-00 part

We're also marking the packages with hold to keep them from updating when you run apt-get upgrade on the VM

Endpoint name

When bringing up the Kubernetes cluster, and because we eventually want to have multiple masters, we will specify an endpoint name, which is what the nodes will use when communicating with the masters.

On all Kubernetes nodes I'll edit the /etc/hosts file with a reference to my cluster endpoint name kube-master

In my scenario I want to bring up one master first before scaling out later. Therefore I'll use the IP of my first master as the reference for the endpoint name, and in an upcoming post we'll change this and point to the load balancer instead.

Add the following lines to /etc/hosts on all nodes
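Based on the IP plan in the table above, the entries would look like this (the commented-out line is the load balancer we'll switch to later):

    192.168.100.151 kube-master
    #192.168.100.150 kube-master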

Again, if you're sticking with one master you can skip the commented line #192.168.100.150 kube-master

Snapshot VMs

At this point it could be a good idea to create a snapshot of your VMs if you want to be able to come back to this state

Initialize the Kubernetes cluster

Now we're ready for installing the cluster!

We'll use the kubeadm tool with the init option to bring up the cluster. I'll also specify the endpoint we discussed earlier (control-plane-endpoint) and the subnet I want to use for the pod networking (pod-network-cidr). We'll see this in action shortly when we install a network add-on. You can use any network as long as it doesn't conflict with anything else.
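A sketch of the init command, run on the first master. The pod network CIDR below is just a placeholder (pick any range that doesn't overlap your existing networks), and --upload-certs is included to make it easier to join more control-plane nodes later:

    sudo kubeadm init --control-plane-endpoint "kube-master:6443" --pod-network-cidr "10.244.0.0/16" --upload-certs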

This command takes some time, but in the end you should get a success message along with some lines for joining more control-plane nodes (masters) and worker nodes.

The first thing to do, however, is to install a pod network in the cluster so the Pods can communicate. But before that, I'll copy/paste the commands mentioned in the output to be able to use the kubectl tool as a regular user.
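These are the standard commands kubeadm init prints for setting up your kubeconfig:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config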

Shell completion

You might also want to set up shell completion, which makes it easier to run kubectl commands. I'm using the following commands for this on my master node.
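Something along these lines (the k alias is optional and just a convenience I'm assuming here):

    echo 'source <(kubectl completion bash)' >> ~/.bashrc
    echo 'alias k=kubectl' >> ~/.bashrc
    echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
    source ~/.bashrc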

Install Pod network add-on

As mentioned I'll use Calico as my network provider. Flannel is also a solution that is easy to get started with

The installation is done with the kubectl tool, but first we need to download a yaml file that describes how the network add-on should be run and we need to verify/change the Pod network CIDR to match what we used in the kubeadm init command.

The process is described in Calico's documentation. Note that if you're preparing for the CKA exam you won't be allowed access to this documentation.
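Downloading the manifest is a single curl (the URL is the one Calico documented at the time and may have moved since):

    curl https://docs.projectcalico.org/manifests/calico.yaml -O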

Find the CALICO_IPV4POOL_CIDR variable in the yaml file and replace the value with the same subnet you used in the kubeadm init command, then save the file.
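The relevant snippet ends up looking something like this, using the placeholder CIDR from the init sketch above:

    # calico.yaml (excerpt)
    - name: CALICO_IPV4POOL_CIDR
      value: "10.244.0.0/16"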

Pay close attention to the indentation in the yaml file

Now let's install Calico in our cluster
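Assuming the edited manifest was saved as calico.yaml:

    kubectl apply -f calico.yaml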

This should create a lot of resources in your Kubernetes cluster

Verify current state

Let's verify our current state. Note that it might take some time before the node shows as Ready.

And we can take a look at the pods created in our cluster. Note that it might take a while before all pods are running.
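Both checks are plain kubectl listings run on the master:

    kubectl get nodes
    kubectl get pods --all-namespaces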

Add worker nodes

Now let's add some worker nodes to our cluster so we can deploy stuff.

The kubeadm init outputted the command for joining both control (master) and worker nodes to the cluster. You need to specify a token and some certificate hashes with the kubeadm join command.

The token mentioned, however, is only valid for 24 hours, so if you're outside of that window you need to generate a new token. Even though I'm within that window in this example, I'll regenerate what's needed anyway.

Generate token

Let's create a new token to use when we join a new node.
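On the master, a new token can be generated with:

    sudo kubeadm token create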

You can view the available tokens with the kubeadm token list command

Find certificate hash

To join a new worker node we also need the certificate hash of the discovery token. As mentioned, this was output when we created the cluster, but we can also retrieve it with an openssl command.
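The command below is the one given in the official kubeadm join documentation for deriving the hash from the cluster CA certificate:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
      openssl dgst -sha256 -hex | sed 's/^.* //'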

Update 2020-12-30 - Print join command

I came across the --print-join-command flag to the kubeadm token create command, which actually prints out the kubeadm join command for you, i.e. no need to find the certificate hash and construct the command yourself (although in a learning process it's good to know that step as well).
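That is:

    kubeadm token create --print-join-command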

Run kubeadm join

Now we're ready to join our first worker node to the cluster with the kubeadm join command. Note that this command is for joining worker nodes
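With the token and hash from above, the join command run on each worker looks like this (fill in your own values for the placeholders):

    sudo kubeadm join kube-master:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>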

Let's verify by running the kubectl get nodes command on our master. Again, note that it might take some time before the node is ready.

Now we can use the same kubeadm join command as on the first node to add our second worker

And once more, verify the nodes in our cluster

In an upcoming post we will add two masters and one more worker to the cluster, but this will do for now.

Running workloads on the master node

Normally the master nodes won't run normal workloads as they are reserved for cluster work. If you have limited resources and want to run workloads on the masters as well you can remove this restriction with the following command
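The restriction is implemented as a taint on the master node(s); in Kubernetes 1.18 it can be removed like this (newer versions use a control-plane taint instead):

    kubectl taint nodes --all node-role.kubernetes.io/master-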

Deploying an application to the cluster

Lastly let's deploy something to the cluster to verify that things are working. We'll deploy a simple nginx webserver with no extra configuration.
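A minimal deployment created imperatively from the public nginx image (the deployment name nginx is simply my choice here):

    kubectl create deployment nginx --image=nginx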

Verify the deployment; it might take a few seconds before the deployment is ready.
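A plain listing of deployments will do:

    kubectl get deployments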

Now let's scale the deployment to two pods to see if both worker nodes can handle workloads
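The scale sub-command handles this:

    kubectl scale deployment nginx --replicas=2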

Now let's check our pods with the kubectl get pods command with the -o wide parameter, which gives more details.
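That is:

    kubectl get pods -o wide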

As we can see the pods are running on both of the two worker nodes

Service

Finally let's see if we can reach the nginx webserver from outside of the cluster by exposing a service

We'll first expose the deployment as a service with the NodePort type and point to the pod's port 80 with the kubectl expose command, and we'll retrieve the service with the kubectl get svc command
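The expose command would be something like this (the service is simply named after the deployment):

    kubectl expose deployment nginx --type=NodePort --port=80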

Let's see what our service looks like.
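For instance:

    kubectl get svc nginx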

Notice how port 80 in the Pod(s) is connected to port 31316 which refers to a port on the nodes. Let's try to access that port from outside of the cluster
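For example with curl from a machine outside the cluster, using the master's IP from the table above and the NodePort shown in the service output:

    curl http://192.168.100.151:31316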

A couple of things to note here. First we're accessing the nginx application from outside of the network by pointing our browser to a port on one of our nodes. Second, the IP we're using is the Master node's IP. Remember that the pods are running on the workers not the master.

By using NodePort we can point the browser to any one of the nodes and get access to the application

Summary

This post has been a write up of how to set up a simple Kubernetes cluster on virtual machines. By no means a comprehensive guide, but it helps me in learning and preparing for the CKA exam.

There's of course much much more to Kubernetes than this, but there's plenty of material out there to go deep diving in. I suggest to start at kubernetes.io.

I'll continue this setup in another blog post where I'll add two more masters and a third worker node to the cluster.

Thanks for reading!



Introduction

The Kubernetes Ingress resource can be annotated with arbitrary key/value pairs. AGIC relies on annotations to program Application Gateway features which are not configurable via the Ingress YAML. Ingress annotations are applied to all HTTP settings, backend pools, and listeners derived from an ingress resource.

List of supported annotations

For an Ingress resource to be observed by AGIC it must be annotated with kubernetes.io/ingress.class: azure/application-gateway. Only then will AGIC work with the Ingress resource in question.

Annotation Key | Value Type | Default Value | Allowed Values | Supported since
appgw.ingress.kubernetes.io/backend-path-prefix | string | nil | | 1.3.0
appgw.ingress.kubernetes.io/backend-hostname | string | nil | | 1.2.0
appgw.ingress.kubernetes.io/backend-protocol | string | http | http, https | 1.0.0
appgw.ingress.kubernetes.io/ssl-redirect | bool | false | | 1.0.0
appgw.ingress.kubernetes.io/appgw-ssl-certificate | string | nil | | 1.2.0
appgw.ingress.kubernetes.io/appgw-trusted-root-certificate | string | nil | | 1.2.0
appgw.ingress.kubernetes.io/connection-draining | bool | false | | 1.0.0
appgw.ingress.kubernetes.io/connection-draining-timeout | int32 (seconds) | 30 | | 1.0.0
appgw.ingress.kubernetes.io/cookie-based-affinity | bool | false | | 1.0.0
appgw.ingress.kubernetes.io/request-timeout | int32 (seconds) | 30 | | 1.0.0
appgw.ingress.kubernetes.io/override-frontend-port | string | | | 1.3.0
appgw.ingress.kubernetes.io/use-private-ip | bool | false | | 1.0.0
appgw.ingress.kubernetes.io/waf-policy-for-path | string | | | 1.3.0
appgw.ingress.kubernetes.io/health-probe-hostname | string | nil | | 1.4.0-rc1
appgw.ingress.kubernetes.io/health-probe-port | int32 | nil | | 1.4.0-rc1
appgw.ingress.kubernetes.io/health-probe-path | string | nil | | 1.4.0-rc1
appgw.ingress.kubernetes.io/health-probe-status-codes | []string | nil | | 1.4.0-rc1
appgw.ingress.kubernetes.io/health-probe-interval | int32 | nil | | 1.4.0-rc1
appgw.ingress.kubernetes.io/health-probe-timeout | int32 | nil | | 1.4.0-rc1
appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold | int32 | nil | | 1.4.0-rc1

Override Frontend Port

This annotation allows configuring the frontend listener to use ports other than 80/443 for HTTP/HTTPS.

If the port is within the App Gateway authorized range (1 - 64999), the listener will be created on that specific port. If an invalid port or no port is set in the annotation, the configuration will fall back to the default 80 or 443.

Usage

Example
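A sketch of what such an Ingress could look like; the resource and service names are placeholders, and the host matches the example request below:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: go-server-ingress-overridefrontendport
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/override-frontend-port: "8080"
    spec:
      rules:
      - host: somehost
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: go-server-service   # hypothetical backend service
                port:
                  number: 80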

External requests will need to target http://somehost:8080 instead of http://somehost.

Backend Path Prefix


This annotation allows the backend path specified in an ingress resource to be rewritten with the prefix specified in this annotation. It allows users to expose services whose endpoints are different from the endpoint names used to expose the service in the ingress resource.

Usage

Example
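A sketch matching the description below; the backend service name is a hypothetical placeholder:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: go-server-ingress-bkprefix
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/backend-path-prefix: "/test/"
    spec:
      rules:
      - http:
          paths:
          - path: /hello
            pathType: Prefix
            backend:
              service:
                name: go-server-service   # hypothetical backend service
                port:
                  number: 80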

In the example above we have defined an ingress resource named go-server-ingress-bkprefix with the annotation appgw.ingress.kubernetes.io/backend-path-prefix: '/test/'. The annotation tells Application Gateway to create an HTTP setting which will have a path prefix override from the path /hello to /test/.

NOTE: In the above example we have only one rule defined. However, the annotation is applicable to the entire ingress resource, so if a user had defined multiple rules the backend path prefix would be set up for each of the paths specified. Thus, if a user wants different rules with different path prefixes (even for the same service), they would need to define different ingress resources.

Backend Hostname

This annotation allows us to specify the host name that Application Gateway should use while talking to the Pods.

Usage

Example
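A minimal annotations excerpt (the hostname is illustrative); the rest of the Ingress follows the same shape as the examples above:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/backend-hostname: "internal.example.com"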

Backend Protocol

This annotation allows us to specify the protocol that Application Gateway should use while talking to the Pods. Supported Protocols: http, https

Note: Make sure not to use port 80 with HTTPS and port 443 with HTTP on the Pods.

Usage

Example
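For example, to have Application Gateway talk HTTPS to the Pods (annotations excerpt only):

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/backend-protocol: "https"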

SSL Redirect

Application Gateway can be configured to automatically redirect HTTP URLs to their HTTPS counterparts. When this annotation is present and TLS is properly configured, the Kubernetes Ingress controller will create a routing rule with a redirection configuration and apply the changes to your App Gateway. The redirect created will be HTTP 301 Moved Permanently.

Usage

Example
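With TLS configured on the Ingress (or via an appgw-ssl-certificate, see below), the annotation itself is simply (excerpt):

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/ssl-redirect: "true"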

AppGw SSL Certificate

The SSL certificate can be configured on Application Gateway either from a local PFX certificate file or via a reference to an Azure Key Vault unversioned secret Id. When the annotation is present with a certificate name and the certificate is pre-installed in Application Gateway, the Kubernetes Ingress controller will create a routing rule with an HTTPS listener and apply the changes to your App Gateway. The appgw-ssl-certificate annotation can also be used together with the ssl-redirect annotation in case of SSL redirect.

Please refer to appgw-ssl-certificate feature for more details.

Note:
  • The annotation 'appgw-ssl-certificate' will be ignored when a TLS Spec is defined in the ingress at the same time.
  • If a user wants different certs with different hosts (multi TLS certificate termination), they would need to define different ingress resources.

Use Azure CLI to install certificate to Application Gateway

  • Configure from a local PFX certificate file

  • Configure from a reference to a Key Vault unversioned secret id

To use PowerShell, please refer to Configure Key Vault - PowerShell.

Usage

Example
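An annotations excerpt; the certificate name is a placeholder for a certificate already installed on the Application Gateway:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/appgw-ssl-certificate: "name-of-appgw-installed-certificate"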

AppGW Trusted Root Certificate

Users can now configure their own root certificates to Application Gateway to be trusted via AGIC. The annotation appgw-trusted-root-certificate shall be used together with the annotation backend-protocol to indicate end-to-end SSL encryption. Multiple root certificates can be specified, separated by commas, e.g. 'name-of-my-root-cert1,name-of-my-root-certificate2'.

Use Azure CLI to install your root certificate to Application Gateway

  • Create your public root certificate for testing

  • Configure your root certificate to Application Gateway

  • Repeat the steps above if you want to configure multiple trusted root certificates

Usage

Example
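An annotations excerpt combining it with backend-protocol for end-to-end SSL, reusing the certificate name from the note above:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/backend-protocol: "https"
        appgw.ingress.kubernetes.io/appgw-trusted-root-certificate: "name-of-my-root-cert1"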

Connection Draining

  • connection-draining: this annotation allows specifying whether to enable connection draining.
  • connection-draining-timeout: this annotation allows specifying a timeout, after which Application Gateway will terminate requests to the draining backend endpoint.

Usage

Example
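An annotations excerpt; the timeout value (in seconds) is illustrative:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/connection-draining: "true"
        appgw.ingress.kubernetes.io/connection-draining-timeout: "60"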

Cookie Based Affinity

This annotation allows specifying whether to enable cookie-based affinity.

Usage

Example
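An annotations excerpt:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/cookie-based-affinity: "true"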

Request Timeout

This annotation allows specifying the request timeout in seconds, after which Application Gateway will fail the request if a response is not received.

Usage

Example
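An annotations excerpt; the timeout value (in seconds) is illustrative:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/request-timeout: "20"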

Use Private IP

This annotation allows us to specify whether to expose this endpoint on the private IP of Application Gateway.

Note:
  1. App Gateway doesn't support multiple IPs on the same port (example: 80/443). An Ingress with the annotation appgw.ingress.kubernetes.io/use-private-ip: 'false' and another with appgw.ingress.kubernetes.io/use-private-ip: 'true' on HTTP will cause AGIC to fail to update the App Gateway.
  2. For an App Gateway that doesn't have a private IP, Ingresses with appgw.ingress.kubernetes.io/use-private-ip: 'true' will be ignored. This will be reflected in the controller logs and in ingress events for those ingresses with a NoPrivateIP warning.

Usage


Example
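An annotations excerpt:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/use-private-ip: "true"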

Azure Waf Policy For Path


This annotation allows you to attach an already created WAF policy to the listed paths for a host within a Kubernetes Ingress resource being annotated.

The WAF policy must be created in advance. Example of using Azure Portal to create a policy:

Once the policy is created, copy the URI of the policy from the address bar of Azure Portal:

The URI would have the following format:
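To the best of my knowledge it has this shape (treat the exact segments as an assumption and copy the real URI from the portal):

    /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/<policy-name>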

Note: The WAF policy will only be applied to a listener if the ingress rule path is not set, or is set to '/' or '/*'.

Usage

Example

The example below will apply the WAF policy
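A sketch with two paths sharing the policy; the resource names, service names, and policy URI are placeholders:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ad-server-ingress
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/waf-policy-for-path: "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/applicationGatewayWebApplicationFirewallPolicies/<policy-name>"
    spec:
      rules:
      - http:
          paths:
          - path: /ad-server
            pathType: Prefix
            backend:
              service:
                name: ad-server-service   # hypothetical backend service
                port:
                  number: 80
          - path: /auth
            pathType: Prefix
            backend:
              service:
                name: auth-service        # hypothetical backend service
                port:
                  number: 80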

Note that the WAF policy will be applied to both the /ad-server and /auth URLs.

Health Probe Hostname

This annotation allows you to specifically define a target host to be used for the AGW health probe. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the host used in the liveness probe definition is also used as the target host for the health probe. However, if the annotation appgw.ingress.kubernetes.io/health-probe-hostname is defined, it overrides the default with its own value.

Usage

Example
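An annotations excerpt; the hostname is illustrative:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/health-probe-hostname: "probe.example.com"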

Health Probe Port

The health probe port annotation allows you to specifically define the target TCP port to be used for the AGW health probe. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the port used in the liveness probe definition is also used as the port for the health probe. The annotation appgw.ingress.kubernetes.io/health-probe-port takes precedence over this default value.

Usage

Example
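An annotations excerpt; the port is illustrative:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/health-probe-port: "8080"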

Health Probe Path

This annotation allows you to specifically define the target URI path to be used for the AGW health probe. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the path defined in the liveness probe definition is also used as the path for the health probe. However, the annotation appgw.ingress.kubernetes.io/health-probe-path overrides it with its own value.

Usage

Example
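An annotations excerpt; the path is illustrative:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/health-probe-path: "/healthz"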

Health Probe Status Codes

This annotation defines the healthy status codes returned by the health probe. The value is a comma-separated list of individual status codes or ranges defined as <start of the range>-<end of the range>.

Usage

Example
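An annotations excerpt; the values are illustrative:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/health-probe-status-codes: "200-399, 401"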

Health Probe Interval

This annotation sets the AGW health probe interval. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the interval in the liveness probe definition is also used as the interval for the health probe. However, the annotation appgw.ingress.kubernetes.io/health-probe-interval overrides it with its own value.

Usage

Example
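An annotations excerpt; the interval (in seconds) is illustrative:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/health-probe-interval: "20"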

Health Probe Timeout

This annotation allows you to specifically define a timeout for the AGW health probe. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the timeout defined in the liveness probe definition is also used for the health probe. However, the annotation appgw.ingress.kubernetes.io/health-probe-timeout overrides it with its own value.

Usage

Example
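An annotations excerpt; the timeout (in seconds) is illustrative:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/health-probe-timeout: "15"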


Health Probe Unhealthy Threshold

This annotation allows you to specifically define the target unhealthy threshold for the AGW health probe. By default, if the backend container's service has a liveness probe of type HTTP GET defined, the threshold defined in the liveness probe definition is also used for the health probe. However, the annotation appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold overrides it with its own value.


Usage

Example
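An annotations excerpt; the threshold value is illustrative:

    metadata:
      annotations:
        kubernetes.io/ingress.class: azure/application-gateway
        appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold: "3"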