Friday, December 21, 2018

Setting Up A Local Kubernetes Environment On Windows

Hello Friends! 

Sorry I haven't written in a while. I could tell you that I've been busy with my 1-year-old and all the other stuff I've been doing to make our life with her as awesome as possible, but I have to admit I don't like making up excuses. So sorry will have to do :)
                                          

In the last little while (a few months), I've been focusing on investigating application telemetry approaches that expose the performance of, and errors encountered by, our apps.

I've been dealing with applications developed on .NET Core but deployed to Kubernetes, and decided to deploy the application telemetry dashboards (via Grafana and Application Insights) to the same Kubernetes environment.
                                                      

In order to test the deployment of the dashboards and the sample application from which the data would be coming, I needed to set up a development environment.

This post will describe the process I followed to deploy my sample app, including the useful blog posts I read and implemented to get my app deployed to my local Kubernetes environment (K8s).


TL;DR

Given that my machine runs Windows 10 and I was using Visual Studio as my sample app IDE, I needed to follow the steps below to get my environment up and running:

1. Install Docker For Windows, and Enable Your Local Kubernetes Cluster
2. Install the WSL (Windows Subsystem for Linux)
3. Hook Up Kubernetes From Linux To Windows and Install Kubectl
4. Deploy Your Docker Image To A Docker Registry
5. Pull Down Your Docker Image and Deploy It To Your Local Kubernetes Cluster
6. Deploy The Sample Application

1. Install Docker For Windows, and Enable Kubernetes

First step: Install Docker For Windows on your machine. This is necessary so you can enable Kubernetes support through Docker For Windows. I'm sure there's a different way to do this, but for my use case, this was a fairly easy way to get Kubernetes up and running. I followed this tutorial and was mostly successful until I got to the step which described hooking up the cluster to my WSL installation...which didn't exist ;)
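Once Kubernetes is enabled in the Docker For Windows settings and it reports "Kubernetes is running", you can sanity-check the cluster from any terminal. A quick sketch (the node name is what my Docker version used at the time and may differ on yours):

```shell
# Confirm Docker itself is up and the daemon answers
docker version

# Confirm the local cluster answers; on Docker For Windows back then,
# the single node was named "docker-for-desktop"
kubectl get nodes
kubectl cluster-info
```

If `kubectl get nodes` shows a node in the `Ready` state, the cluster is good to go.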

2. Install The WSL (Windows Subsystem For Linux)
In order to do this, I followed this blog post from Microsoft. I had some problems enabling the environment, but I think that was due to my specific network setup. I solved the problem with a first-page Google search of the error message.

K8s WSL

3. Hook Up Kubernetes From Linux To Windows and Install Kubectl

After installing the WSL, I needed to finish hooking Kubernetes up from the WSL to Windows and install kubectl. The point of hooking up Kubernetes to the WSL is that I could mount folders from Windows into the cluster. This meant that if I wanted to add files to be deployed by Kubernetes, such as containers or specific dashboard files, I could. I also needed the "kubectl" command to work in the WSL bash. Kubectl is your interface (CLI) for accessing Kubernetes functions via bash (the command line); it's the way nearly everything is initiated from a user perspective. To do this, I went back to the Install Docker For Windows blog post (the one mentioned earlier) and resumed at the "Installing Kubernetes CLI In WSL" step. I finished that step and the following "Copying Kubernetes Config from Windows" step, and was all set up with kubectl.
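For reference, the WSL side of that hookup boiled down to a few commands. This is a sketch from my setup rather than the blog post verbatim; the kubectl version, Windows username, and context name are illustrative and will differ on your machine:

```shell
# Inside the WSL bash shell: download the Linux kubectl binary and put it on the PATH
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.10.3/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

# Copy the cluster config that Docker For Windows generated on the Windows side
mkdir -p ~/.kube
cp /mnt/c/Users/<your-windows-user>/.kube/config ~/.kube/config

# Point kubectl at the local cluster and verify it can see the system pods
kubectl config use-context docker-for-desktop
kubectl get pods --all-namespaces
```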

My Dev Machine W Kubectl, Docker and local Kubernetes

4. Deploy Your Docker Image To A Docker Registry

Once I was confident that my environment was up and running (since I could run kubectl...which btw I will never get sick of saying ;)), I needed to create a sample application to deploy to a Docker registry and then pull down into the local Kubernetes environment. I decided to increase the cuteness of a sample .NET Core application template and create a sample app to honour my family. The source for the app is in this GitHub repo if you are interested in seeing its breakdown.

To deploy the app to the Docker registry, I needed to set up a repository on Docker Hub and make use of Visual Studio's publish feature. I know, I know, this is probably not the best way to publish, given that I'm not using any CI/CD pipeline and am literally forcing it out to a publicly exposed registry. But since my goal was to figure out a process for deploying a dashboard to monitor this sample app, I allowed myself to break the rules here. I followed this guide to create my Docker Hub repo and basically right-click published from the Visual Studio solution explorer.

NOTE: I published my app to a public Docker Hub repository, but only as a proof of concept. My plan for the actual application is to publish it to an enterprise Docker registry, with proper security, etc.
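For anyone avoiding the right-click route, the equivalent Docker CLI steps look roughly like this (the image and repo names here are illustrative, not my actual ones):

```shell
# Build the image from the Dockerfile Visual Studio generated for the app
docker build -t sampleapp .

# Log in, tag the image with the repo/app:tag pattern Docker Hub expects, and push
docker login
docker tag sampleapp myrepo/sampleapp:latest
docker push myrepo/sampleapp:latest
```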


Visual Studio + Dockerfile for example app

Visual Studio Right Click Publish...The LOECDA (MS DevOps Evangelists) would kill me!

Docker-hub registry after upload


5. Pull Down Your Docker Image and Deploy It To Your K8s Cluster

So let's summarize: to get to this point, I needed to install Docker and Kubernetes, install the WSL, hook up the WSL to Windows and install kubectl, create a sample app (not covered in this post), create a Docker container, and deploy it to my repo on Docker Hub. That's a lot of stuff! That process took me a few days to work through, and after those days I was eager to actually see my app launched on my local Kubernetes cluster. But there were a few more things to do before I could reach my goal.

The Deployment File

The deployment file controls which resources will be deployed to the Kubernetes cluster. In my case, I deployed a service and a deployment. The deployment resource takes care of creating the pods and pulling the image from the Docker registry. Note that there are hooks (labels and selectors) in the deployment which tie it to the service, and that the definition of how to pull the Docker image follows a specific pattern (repo/app:tag). In my case, the tag for the application was specified in Visual Studio, but this can also be done with docker commands.
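A minimal version of such a deployment file looks like the sketch below (the names and image are illustrative; I've written it as a shell heredoc so the whole step is copy-pasteable). Note the app label, which is the hook tying the service to the deployment's pods:

```shell
# Write a minimal deployment + service definition to sampleapp.yaml
cat <<'EOF' > sampleapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sampleapp              # must match the pod template labels below
  template:
    metadata:
      labels:
        app: sampleapp
    spec:
      containers:
      - name: sampleapp
        image: myrepo/sampleapp:latest   # repo/app:tag, as pushed to Docker Hub
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sampleapp
spec:
  selector:
    app: sampleapp                # ties the service to the deployment's pods
  ports:
  - port: 80
    targetPort: 80
EOF
```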

Kubernetes deployment file

Visual Studio docker hub deployment specifying tag

6. Deploy The Sample Application

Now that all the hookup is finally done, it's time to deploy the app! This is where the magic of Kubernetes really shines. To deploy the application, we use the Kubernetes CLI (kubectl) and tell it to perform the deployment using the instructions in the deployment file above. The "kubectl apply -f" command specifies which file to use as the deployment file, and kubectl takes care of creating the resources (in this case, the deployment and the service). We then use port forwarding to pass traffic through to the actual pod where the app is running.
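Assuming the deployment file is saved as sampleapp.yaml (the file and resource names here are illustrative), the commands boil down to:

```shell
# Create (or update) the resources described in the deployment file
kubectl apply -f sampleapp.yaml

# Watch the deployment, service, and pods come up
kubectl get deployments
kubectl get services
kubectl get pods

# Forward local port 8080 through to the pod's port 80, then browse to localhost:8080
kubectl port-forward deployment/sampleapp 8080:80
```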

Kubectl commands to deploy sample app

The sample application is live in my local kubernetes!

Welp. Now we all know how to set up a local Kubernetes environment and deploy a sample application to it from a Docker repository! That's it for now!

Resources


Monday, January 15, 2018

CodeMash 2018 Recap


This past week, I participated in a conference at the Kalahari Resort in Sandusky, Ohio. CodeMash is an annual developer conference, and judging by the speakers and participant volume, it seems to be one of the better attended and organized technical conferences in the Midwest. My company sent me and five of my friends to the conference to learn about and bring back new ideas that could improve our team. We traveled together, ate together, went out for beers together, and learned a crap ton. This post will summarize what I thought were some of the most important themes, tools, and ideas that I encountered.

  

Cross Functionality

As a quality champion attending what I thought was a developer-focused conference, I kind of expected to encounter mainly development talks and workshops. But it turns out the beauty of CodeMash is that, by its nature, it caters to all types of technologists. This year's tracks included Architecture, Data (big/small/otherwise), Design (UI/UX), DevOps, Enterprise/Large-Shop Development, Hardware/IoT, Mobile, Programming Principles, Project Leadership/Soft Skills, Security, Software Quality, and Web/Front-End. I found myself focusing on the Software Quality, Security, and DevOps tracks, with a few other session types sprinkled in. It was refreshing to see all aspects of the SDLC represented, pushing the idea of a true cross-functional technologist forward.

Notable Sessions

Webapp Pentesting for Developers and QA Persons

This session was conducted by Brian King and focused on tools that can help developers and QA persons get started with penetration testing. Brian did a really great job of differentiating functional testing from pentesting, and then guided us through some common approaches to pentesting using free tools. He walked us through example tests and drilled home the idea that essential pentesting approaches can be carried out not only by specialists (like himself) but also by pentesting noobs (like me). I walked away with tools and approaches that I am excited to bring back to my team.


DevOps Zen: Injecting Automated Tests Into Infrastructure 




This session was conducted by Stephen Shary and focused on testing NGINX when it is implemented as a reverse proxy. I was honestly blown away by this session, not only because it was really well conducted, but because it introduced the idea of applying integration tests to the NGINX configuration. Stephen works for Kroger Technology (yea, the grocery chain!) and ran into a problem with testing his infrastructure. An NGINX setup is built on configuration files which specify the routing of traffic through the appliance to the web applications that sit upstream of it. Stephen's teams maintained the configuration files in source control but ran into major issues when changes were made and checked in. This led him and his team to look for, and eventually develop, an open source integration testing framework called SnowGlobe.



The value of the framework is that, when deployed, it mimics upstream dependencies, effectively mocking your web apps while running tests against your NGINX configuration(s). The framework comes wrapped in a nice Docker container and can be integrated into a continuous integration flow quite easily. During the session, Stephen demonstrated how tests could catch erroneously checked-in configuration changes, such as a poorly configured redirection. Stephen's team is eager for other teams to adopt the framework and add to it, so he has extended an offer to help teams trying it out. Watch out Stephen, our team is pretty eager to get some tests running!

Favourite Workshop

Devour The Cloud With Locust Swarms: Hands-On Load Testing


                                      

This workshop was run by Steven Jackson and Nick Barendt. It involved building a cloud-based load testing lab and launching an application for testing (the application under test) on AWS. We started by launching the infrastructure necessary for hosting the load generator and the application under test, which was a great lesson in itself. We then moved on to writing simple scripts to run on the Locust load testing framework, and followed that up with load tests of varying degrees of difficulty. Finally, we implemented fixes (the introduction of caching) to our application under test and saw the results of the fixes in the subsequent load tests.

This process was really awesome to walk through, as it covered the full spectrum of what an engineer interested in performance would have to do. I've been to many workshops focused on writing tests which don't give you an idea of the work necessary before any tests are written. Steven and Nick did an awesome job giving us the tools necessary to truly establish a load testing environment and run load tests of varying difficulty. It was challenging, but thanks to the crystal clear instructions in their GitHub repo, I did not have a problem completing the exercises.




Favourite Talk

Sondheim, Seurat, and Software: finding art in code



Due to unanticipated unfavorable weather forecasts, this was the last talk I saw at CodeMash 2018. But what a talk it was. Any time you get the chance to listen to one of the software industry's gurus, you just go. I must admit, before this talk I was a bit skeptical, but I knew that if I had the chance to see Jon Skeet talk about anything relating to software, I should.

I was not disappointed. Jon's talk was one that I think transcended the traditional boundaries of the technical and soft-skill talks I encountered at CodeMash. Jon spoke of software and compared it to his favorite musical, "Sunday in the Park with George" by Stephen Sondheim. He worked through all sorts of lessons leading to the idea that developing systems is similar to writing a play, supporting this with examples of design, composition, and light, and drawing parallels to craftsmanship in both disciplines.

Listening to Jon Skeet speak of the SDLC was less a lecture and more a sermon. His passion for the higher ideals of craftsmanship shone through above all else, and really inspired the rest of us to think along the same lines. I was blown away by his ability to translate his experiences and make them relatable to our individual struggles.



Conclusion

This was my first time at CodeMash, and I am excited to say I think I found a gem of a conference. Anyone who's ever been has told me it's pretty great, and now I can confirm it. I will be back, and in the future, I'll bring more of my family :)



Specific Resources Bookmarked At CodeMash

https://www.thoughtworks.com/radar/tools
https://github.com/Kroger-Technology/Snow-Globe
https://csfirst.withgoogle.com/en/home
http://cidrdb.org/cidr2015/Papers/CIDR15_Paper16.pdf
https://github.com/repsheet/repsheet-nginx/blob/master/spec/integration/integration_spec.rb
https://github.com/jemurai/nginx_workshop
https://github.com/stevenjackson/devour-the-cloud