Friday, December 21, 2018

Setting Up A Local Kubernetes Environment On Windows

Hello Friends! 

Sorry I haven't written in a while. I could tell you that I've been busy with my 1-year-old and all the other stuff I've been doing to make our life with her as awesome as possible, but I have to admit that I don't like to make up excuses. So sorry will have to do :)

In the last little while (a few months), I've been focusing on investigating application telemetry approaches to expose the performance of, and errors encountered by, our apps.

I've been dealing with applications developed on .NET Core but deployed to Kubernetes, and decided to deploy application telemetry dashboards (via Grafana and Application Insights) to the same environment (Kubernetes).

In order to test the deployment of the dashboards and the sample application from which the data would be coming, I needed to set up a development environment.

This post will describe the process I followed to deploy my sample app, including the useful blog posts I read and implemented to get my app deployed to my local Kubernetes (K8s) environment.


Given that my machine runs Windows 10 and I was using Visual Studio as my sample app IDE, I needed to follow the steps below to get my environment up and running:

1. Install Docker For Windows, and Enable Your Local Kubernetes Cluster
2. Install the WSL (Windows Subsystem For Linux)
3. Hook Up Kubernetes From Linux To Windows and Install Kubectl
4. Deploy Your Docker Image To A Docker Registry
5. Pull Down Your Docker Image And Deploy It To Your Local Kubernetes Cluster
6. Deploy The Sample Application

1. Install Docker For Windows, and Enable Kubernetes

First step: Install Docker For Windows on your machine. This is necessary so you can enable Kubernetes support through Docker For Windows. I'm sure there's a different way to do this, but for my use case, this was a fairly easy way to get Kubernetes up and running. I followed this tutorial and was mostly successful until I got to the step which described hooking up the cluster to my WSL installation...which didn't exist ;)

2. Install The WSL (Windows Subsystem For Linux)
In order to do this, I followed this blog post from Microsoft. I had some problems enabling the environment, but I think that was due to my specific network setup. I solved the problem with a first-level Google search of the error message.


3. Hook Up Kubernetes From Linux To Windows and Install Kubectl

After installing the WSL, I needed to finish my Kubernetes hookup from the WSL to Windows and install kubectl. The point of hooking up Kubernetes to the WSL is that I could mount folders from Windows to the cluster. This meant that if I wanted to add files to be deployed by Kubernetes, such as containers or specific dashboard files, I could. I also needed the "kubectl" command to work in the WSL bash. Kubectl is your command-line interface (CLI) for accessing Kubernetes functions via bash. It's the way that nearly everything is initiated from a user perspective. To do this, I went back to the Install Docker For Windows blog post (the one mentioned earlier) and resumed at the "Installing Kubernetes CLI In WSL" step. I finished that step and the following "Copying Kubernetes Config from Windows" step, and was all set up with kubectl.
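For reference, the gist of that hookup step is copying the Kubernetes config that Docker For Windows generates into the WSL home directory and pointing kubectl at the local cluster. A rough sketch (the Windows username is a placeholder, and the context name may differ between Docker versions):

```shell
# Copy the Kubernetes config generated by Docker For Windows into the WSL home
# (/mnt/c is the default WSL mount point for the C: drive; <your-user> is a placeholder)
mkdir -p ~/.kube
cp /mnt/c/Users/<your-user>/.kube/config ~/.kube/config

# Point kubectl at the local Docker For Windows cluster and verify connectivity
kubectl config use-context docker-for-desktop
kubectl cluster-info
```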

My Dev Machine W Kubectl, Docker and local Kubernetes

4. Deploy Your Docker Image To A Docker Registry

Once I was confident that my environment was up and running (since I could run kubectl...which btw I will never get sick of saying ;)), I needed to create a sample application which I would deploy to a Docker registry, and then pull down into the local Kubernetes environment. I decided to increase the cuteness of a sample .NET Core application template and create a sample app to honour my family. The source for the app is on this GitHub repo if you are interested in seeing its breakdown.

To deploy the app to the Docker registry, I needed to set up a repository on Docker Hub and make use of Visual Studio's publish feature. I know, I know, this is probably not the best way to publish, given that I am not using any CI/CD pipeline and am literally forcing it out to a publicly exposed registry, but given that my goal was to figure out a process for deploying a dashboard which would monitor this sample app, I allowed myself to break the rules for this sample app. I followed this guide to create my Docker Hub repo and basically right-click published in the Visual Studio Solution Explorer.
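For anyone who prefers the command line over right-click publishing, the equivalent flow with the plain Docker CLI looks roughly like this ("myuser/sampleapp" is a placeholder for your own Docker Hub repo and tag):

```shell
# Build the image from the project's Dockerfile and tag it for Docker Hub
docker build -t myuser/sampleapp:v1 .

# Authenticate against Docker Hub, then push the tagged image to the registry
docker login
docker push myuser/sampleapp:v1
```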

NOTE: I published my app to a public Docker Hub repo, but this was only for a proof of concept. My plan for my actual application is to publish it to an enterprise Docker registry, with proper security, etc.

Visual Studio + Dockerfile for example app

Visual Studio Right Click Publish...The LOECDA (MS DevOps Evangelists) would kill me!

Docker-hub registry after upload

5. Pull Down Your Docker Image and Deploy It To Your K8s Cluster

So let's summarize: to get to this point, I needed to install Docker and Kubernetes, install the WSL, hook up the WSL to Windows and install kubectl, create a sample app (not covered in this post), and create a Docker container and deploy it to my repo on Docker Hub. That's a lot of stuff! That process took me a few days to work through, and after those days, I was eager to actually see my app launched on my local Kubernetes cluster. But there were a few more things I needed to do before I could reach my goal.

The Deployment File

The deployment file controls which resources will be deployed to the Kubernetes cluster. In my case, I deployed a service and a deployment. The deployment resource takes care of creating the pods and pulling the image from the Docker registry. Note that there are some hooks (labels) in the deployment which tie it to the service, and that the definition of how to pull the Docker image follows a specific pattern (repo/app:tag). In my case, the tag for the application was specified in Visual Studio, but I think this can be done through Docker commands as well.
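To make the screenshot below easier to follow, here is a minimal sketch of such a deployment file; all of the names are placeholders, and "myuser/sampleapp:v1" stands in for the repo/app:tag of the pushed image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sampleapp              # hook that ties the deployment's pods to the service
  template:
    metadata:
      labels:
        app: sampleapp
    spec:
      containers:
      - name: sampleapp
        image: myuser/sampleapp:v1   # repo/app:tag pattern used to pull from Docker Hub
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sampleapp-service
spec:
  selector:
    app: sampleapp                # matches the pod label above
  ports:
  - port: 80
    targetPort: 80
```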

Kubernetes deployment file

Visual Studio docker hub deployment specifying tag

6. Deploy The Sample Application

Now that we've finally got all the hookup done, it's time to deploy the app! This is where the magic of Kubernetes really shines. To deploy the application, we will use the Kubernetes CLI (kubectl) and tell it to perform the deployment using the instructions in the deployment file above. We will use the "kubectl apply -f" command, which specifies which file to use as the deployment file. Kubectl takes care of setting up the resources (in this case the deployment and the service). We then use port forwarding to pass through to the actual pod where the app is running.
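The command sequence, sketched with placeholder names that match a generic deployment file, looks something like:

```shell
# Create (or update) the resources described in the deployment file
kubectl apply -f deployment.yaml

# Confirm the deployment, service, and pods came up
kubectl get deployments,services,pods

# Forward a local port through to the pod so the app is reachable in a browser
# (resource name and ports are placeholders)
kubectl port-forward deployment/sampleapp-deployment 8080:80
```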

Kubectl commands to deploy sample app

The sample application is live in my local Kubernetes!

Welp. Now we all know how to set up a local Kubernetes environment and deploy a sample application to it from a Docker repository! That's it for now!


Monday, January 15, 2018

CodeMash 2018 Recap

This past week, I participated in a conference at the Kalahari Resort in Sandusky, Ohio. CodeMash is an annual developer conference. Judging by the speakers and participant volume, it seems to be one of the better attended and organized technical conferences in the Midwest. My company sent me and 5 of my friends to the conference to learn about and bring back new ideas which could improve our team. We traveled together, ate together, went out for beers together, and learned a crap ton. This post will summarize what I thought were some of the most important themes, tools, and ideas that I encountered.


Cross Functionality

As a quality champion attending what I thought was a developer-focused conference, I kind of expected to encounter mainly development talks and workshops. But it turns out the beauty of CodeMash is that, by its nature, it caters to all types of technologists. This year's tracks included Architecture, Data (big/small/otherwise), Design (UI/UX), DevOps, Enterprise/Large-Shop Development, Hardware/IoT, Mobile, Programming Principles, Project Leadership/Soft Skills, Security, Software Quality, and Web/Front-End. I found myself focusing on the Software Quality, Security, and DevOps tracks, with a few other session types sprinkled in. But it was refreshing to see all aspects of the SDLC represented, pushing the idea of a true cross-functional technologist forward.

Notable Sessions

Webapp Pentesting for Developers and QA Persons

This session was conducted by Brian King and focused on tools which could help developers and QA persons get started with penetration testing. Brian did a really great job of differentiating functional testing from pentesting and then went on to guide us through some common approaches to pentesting, using free tools. He walked us through example tests and drilled the idea home that essential pentesting approaches can be carried out not only by specialists (like himself) but also by pentesting noobs (like me). I walked away with tools and approaches which I am excited to bring back to my team.

DevOps Zen: Injecting Automated Tests Into Infrastructure 

This session was conducted by Stephen Shary and focused on testing NGINX when it is implemented as a reverse proxy. I was honestly blown away by this session, not only because it was really well conducted, but because it introduced the idea of applying integration tests to the NGINX configuration. Stephen works for Kroger Technologies (yea, the grocery chain!) and ran into a problem with testing his infrastructure. NGINX is set up through one or more configuration files which specify the routing of traffic through the appliance to the web applications which sit upstream of it. Stephen's teams maintained the configuration file in source control but ran into mega issues when changes were made and checked in. This caused him and his team to look for, and eventually develop, an open source integration testing framework called SnowGlobe.

The value of the framework is that when deployed, it mimics upstream dependencies, effectively mocking your web apps while running tests against your NGINX configuration(s). The framework comes wrapped in a nice Docker container and can be integrated into a continuous integration flow quite easily. During the session, Stephen demonstrated how tests could catch erroneously checked-in configuration changes, such as a poorly configured redirection. Stephen's team is eager for other teams to adopt the framework and add to it, so he has extended an offer to help teams trying it out. Watch out Stephen, our team is pretty eager to get some tests running!

Favourite Workshop

Devour The Cloud With Locust Swarms: Hands-On Load Testing


This workshop was run by Steven Jackson and Nick Barendt. It involved building a cloud-based load testing lab and launching an application for testing (the application under test) on AWS. We started with launching the infrastructure necessary for hosting the load generator and the application under test, which was a great lesson in itself. We then moved to writing simple scripts to run on the Locust load testing framework, and followed that up with load tests of varying degrees of difficulty. Finally, we implemented fixes (introducing caching) to our application under test and saw the results of the fixes in the subsequent load tests.
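To give a flavour of what those Locust scripts look like, here is a minimal sketch; the endpoints are placeholders, and the API shown is from a recent version of Locust rather than necessarily the exact one used in the workshop:

```python
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user waits 1-3 seconds between requests
    wait_time = between(1, 3)

    @task(3)
    def index(self):
        # Weighted 3x: most simulated traffic hits the home page
        self.client.get("/")

    @task(1)
    def item_detail(self):
        self.client.get("/items/1")  # placeholder endpoint
```

A swarm is then launched against the application under test with something like `locust -f locustfile.py --host http://<app-under-test>`, and Locust reports request rates, response times, and failures as the load ramps up.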

This process was really awesome to walk through, as it covered the full spectrum of what an engineer interested in performance would have to do. I've been to many workshops focused on writing tests which do not give you an idea of the work necessary before any tests are written. Steven and Nick did an awesome job giving us the tools necessary to truly establish a load testing environment and run load tests of varying difficulty. It was challenging, but thanks to the crystal clear instructions in their GitHub repo, I did not have a problem completing the exercises.

Favourite Talk

Sondheim, Seurat, and Software: finding art in code

Due to unanticipated unfavorable weather forecasts, this was the last talk I saw at CodeMash 2018. But what a talk it was. Any time you get the chance to listen to one of the software industry's gurus, you just go. I must admit, before this talk, I was a bit skeptical, but I knew that if I had the chance to see Jon Skeet talk about anything relating to software, I should.

I was not disappointed. Jon's talk was one that I think transcended the traditional boundaries of the technical or soft-skill talks I encountered at CodeMash. Jon spoke of software and compared it to his favorite musical, "Sunday In The Park With George", by Stephen Sondheim. Jon talked through all types of lessons leading to the idea that developing systems is similar to writing a play. He supported this with examples of design, composition, and light, drawing parallels to craftsmanship in both disciplines.

Listening to Jon Skeet speak of the SDLC was less a lecture and more a sermon. His passion for the higher ideals of craftsmanship shone through above all else, and really inspired the rest of us to think along the same lines. I was blown away by his ability to translate his experiences and make them relatable to our individual struggles.


This was my first time at CodeMash, and I am excited to say I think I found a gem of a conference. Anyone who's ever been has told me it's pretty great, and now I can confirm it. I will be back, and in the future, I'll bring more of my family :)

Specific Resources Bookmarked At CodeMash

Wednesday, November 1, 2017

Tech Dad Manifesto

On October 3rd, 2017 at 7:13am, I became a Dad. I had been preparing for that moment for a long time, but I truly had no idea what the moment would bring. The minute my daughter Victoria arrived, my life changed from being the best me to being the best Dad. Immediately, I started thinking about new improvements to my personal and family process which would positively impact Victoria and my wife Brandi. I blazed through books such as “Dude, You’re A Dad” and “Dad Jokes”, and believe it or not, even started investigating purchasing a minivan (GASP!). I was away from the office for about a month, and in that timeframe, got used to being #1Dad (I even have a plaque). I cooked, I cleaned, I shopped, and I did a lot of baby snuggling. The time I spent with my new family addition was awesome.

Unimpressed Baby Is Unimpressed

Then reality set in, and I realized that in order to continue being #1Dad, I would have to elevate my game at work, both in terms of process and delivery. Now even though our tech team alone added 11 new little humans to our families in the last year (2017), I understand that not all of us are caring for little ones. But I am confident that all of us on the tech team struggle with elevating our game at work while having enough time to care for or participate in the things we love.

Whether it's kids, puppies, or video games, the struggle of maintaining a balance between consistent high performance and time spent on the things or people you care about outside of work is real! After spending a month at home with my daughter and wife, I realized I needed to tighten up my process to fully utilize every possible second toward the things which would provide the biggest gain in personal efficiency. The statement of action below is what I challenged myself to follow:

The Tech Dad Manifesto

Be Purposeful In All Activities
I refuse to do things without an agreed-upon purpose. Activities that I cannot attribute to a predetermined goal do not lead to my team's or my own long-term success. Goals are established before tasks are initiated. Goals are prioritized based on some sort of needs analysis. I understand that when finished, some goals may not turn out to be impactful. It is ok to only know what you know, and to try to establish the best course of action with the information at hand. Ignore the noise.

Be Disciplined
I will eliminate all unnecessary distractions, whether it’s closing browser windows, turning off my phone, or aggregating team member questions into a predetermined “question time”. My time is precious, and I have to ensure it is best used. I will politely exit conversations or meetings which do not have a purpose that relates to my goal or task. I am the master of my own time, and while I understand that I am not perfect, I will strive to improve my time discipline every single day. Every second counts.

Be Honest
I will give and receive feedback immediately and constructively. When I see an opportunity for improvement, I will voice my opinion, even if it is uncomfortable to do so. I will voice feedback with the intent of improvement and care. I will shape my behaviour to ensure feedback is received as constructive, and not belittling or passive-aggressive. When I receive feedback, I will take action on it and ensure the feedback giver is treated with appreciation, even if the feedback is hard to hear. Do the right thing.

Be Comfortable With Being Uncomfortable
As a technologist, I understand the pace of our industry is very fast. I understand that in order to keep up, I have to always be learning and refactoring. I will not apologize for not being the master of all domains. I will be comfortable with the fact that I need to keep learning and advancing, and that I may not remember all the things I have learned in the past. I will always strive to learn new things, and to document, implement, and share my learnings. I will not be disheartened by the daunting task of implementing my big dreams. I will not become complacent in my thirst for knowledge and I will not give up. Obsessed with finding a better way.

Be A Team Player
I will believe in my team and contribute my ideas to their process. I will be engaged, and I will make clear my expectation that my team members are as well. I will not stand for disengagement and will seek to understand the reason behind it. I will lean on my team and expect that they lean on me in their time of need. I will foster and rely on relationships throughout the team as a way of solving hard problems. We are the “they”.


Above all else, I will strive to achieve a balance between what I deem most important and the activities that support it. I will not spend all my time on one while sacrificing the other, but I will not use the first as an excuse for not performing at the other. I will be purposeful, disciplined, honest, uncomfortable, and a team player. With this approach, I will be able to provide for my family while being the best geek I can be. That is how I will remain #1Dad and a top contributor on my tech team.

My 2017 Gilbie :)

WIFM (What's In It For Me)

I know, not everyone is #1Dad, or #1Mom…or plans to be, or even wants to be. But I am confident that every one of us wants more time to do the things we love, and most of us want to be awesome at what we do.

The above tenets were individually derived from the experience of the Crusaders team, and they have worked really well for us to date. I am a firm believer that they are highly reproducible for each and every one of us, and while not easy to perfect, they will improve your ability to strike a balance between doing the things you love and the thing that allows you to do those things. Ask yourself: what do you love to do? What is the reason you push yourself to be better every day?

For me, that reason was re-defined on October 3rd, 2017 at 7:13 am.

Mission in life: #1DAD

Wednesday, September 20, 2017

QL Tech Con 17 Resources

Hey Y'all!


In support of my talk at QLTechCon17, I'm posting a few resources that I thought you'd find useful:

Presentation Resources

1. My presentation
2. My twitter handle
3. The best technology team career site

Tool Examples: Orchestrators

4. TFS
5. Visual Studio Team Services
6. Jenkins
7. Git Test

Tool Examples: Execution Environments

8. Xamarin Test Cloud
9. Sauce Labs
10. Selenium Grid

Monday, August 28, 2017

Automation Delivery Pipelines

Hey, Friends! Yes, I know, it's been a while. It's been a busy summer for me. Vacations, family, and sailing kept summoning me to the outside world (priorities, right?), which in the winter months seemed so dreary and uninviting.

But enough about my excuses. Let's talk about new ideas. In the closing stages of last year, I posted an idea about automation delivery channels and their importance. I outlined the idea and why it is something we should care about. After a bit of thought, I would like to rebrand the idea a bit. From now on, I am going to refer to it as the "Test Automation Pipeline". I think this verbiage speaks more to the topic as specific to integrating DevOps and testing. In this blog post I am going to expand on the earlier introduction and provide steps on how to make one!


A Test Automation Pipeline is a test execution delivery tool that enables fast, reliable, scalable automated test execution. It should be created as early as possible in a test automation project. The pipeline consists of a test orchestrator and a test execution environment. In order to set up the pipeline we will need to:

  1. Pick a test orchestrator and execution environment
  2. Hook up your test orchestrator to your execution environment
  3. Define your trigger strategy
  4. Define your execution steps 
  5. Educate your team

Test Automation Pipeline

Ah yes, the old 'Buzz Word Bingo' conundrum, "Synergy", "Efficiency", "Cloud Computing", etc. etc. etc. Is a Test Automation Pipeline one of these generic terms? Is the concept worth remembering? When it comes to test automation, you have to judge for yourself by answering a few questions which relate to your (or your team's) testing maturity.
  1. Can you (or any member of your team) run a single automated test whenever you please?
  2. Can you (or any member of your team) run a hundred automated tests whenever you please?
  3. Can you (or any member of your team) run a hundred tests, on different environments whenever you please? (ie. operating systems, browsers, devices, etc.) 
  4. Can you (or any member of your team) run a hundred tests, in different environments, and receive results fast enough to act on them?
  5. Can you (or any member of your team) repeat #4 as often as necessary?

If you've answered yes to question #5, then you should stop reading, because you have achieved a high enough level of maturity that the rest of this blog post will not teach you much. Go ahead, go do something else :).

But if you are like the majority of the teams I've dealt with, your answers trailed off somewhere around question 1 or 2, and I think I know why. Most teams dedicate a lot of time to test automation, specifically writing scripts. Transitioning to a high level of test automation maturity takes a lot of effort. A team (or an individual) needs to learn how to write scripts, run scripts, debug scripts, determine what is automatable, decide what to automate, and communicate results. There's a lot of stuff to do!

I've been through this cycle myself. Our team went all in on test automation; we absolutely needed to. Our manual test efforts just could not keep up with the growing complexity of our application, and we knew we needed to automate away as much of the repetitive checking we found ourselves constantly performing. So we started designing a solution for implementing test automation.

We decided that for the automation effort to be successful, the end result had to be scalable, flexible, reliable, and easy to execute...in other words, "Push Button, Get Test". To achieve those goals, we decided that first we needed a way to run the test automation that we would later build. We needed a way for anyone who wanted to execute test automation to get results and check their work quickly. At Title Source, we truly believe in cross functionality in our teams. A test suite can be executed automatically by a nightly trigger early in the morning, then by a Business Analyst who wants to check regression results of a specific test case, and finally by a developer implementing a new feature in the PM (likely the late PM, since Devs are vampires and love the night time :) JK). Before writing any meaningful test automation code, we implemented a test automation pipeline.

Steps To Implementing A Test Automation Pipeline 

1. Pick A Test Orchestrator And Execution Environment

Oh right, a test orchestrator...and an execution environment...of course! Wait, what are those? Were those the thoughts that ran through your head? Let me enlighten you.

1.1 Picking A Test Orchestrator
A test orchestrator is a device which will automatically organize and execute your tests. A test orchestrator does not have to (but can) have a single responsibility. A test orchestrator is absolutely necessary because it is responsible for organizing how and where your tests are executed. It is also the device which receives feedback from your tests and organizes it in a digestible way. Examples of popular test orchestrators include the "Test Controller" used in Microsoft's legacy "Test Manager/Lab Manager" testing infrastructure (see helpful link #1 below), Microsoft's Team Foundation Server (or Visual Studio Team Services), and the Jenkins build system. The last two are great examples of how you can piggyback on an existing system for testing purposes. The choice of which orchestrator to use for your team's (or your own) testing purposes is highly team specific. All three are great choices, but your decision should be swayed by a few factors.

A. Can I Piggyback On To An Existing Product?
i.e., is TFS already implemented in my environment? Does my team rely on Jenkins for building code? If the answer is yes, I would say that you will encounter the least friction in implementation and adoption if you stick to what you have.

B. I Have Money To Spend
If you do not have an already existing solution, do you have a budget? If you have no limits, you can go all out and hire a consultant to implement the latest and greatest build system for you, hook it up, and you can simply start writing tests to be executed in it. I would argue that the majority of us operate in a world where the value you show pays for the things you want to play with. In this situation, my choice of orchestrator would be VSTS (the Azure-based TFS). Some will call me a Microsoft fanboy, but I have to say, the way that MS (Microsoft) has iterated this product from a testing perspective has been nothing short of amazing. VSTS has built up an impressive set of features for scaling, reporting, and coordinating test executions. They (the VSTS product team) respond to problems via GitHub and User Voice pages and, most importantly, operate on a 2-week release cycle. This ensures a stable, reliable flow of improvements. VSTS (and TFS) provides an easy to use test execution method and out-of-the-box dashboarding for test results. It is my choice for test orchestration. The VSTS feature schedule can be found in helpful link #3 below.

The VSTS Feature Timeline Is Public! So Cool!

C. I Do Not Have Money To Spend. Like None. I'm Broke.
If you do not have an already existing solution and you do not have money to spend, a great test orchestrator is Jenkins. It's free as in speech. While you will still have to set it up and have a machine to run it from, your cost is your time and not a credit card. Jenkins operates on a plugin model, so it's really great for hooking up to test suites which are executed in environments not traditionally covered by pay-for models. I found it very popular with teams developing mobile products and requiring tests to be run on many devices.
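As an illustration of how Jenkins can play the orchestrator role, a declarative pipeline that pulls test code, runs a suite on a schedule, and archives the results might look roughly like this; the cron schedule, test command, and artifact pattern are all placeholders:

```groovy
// Illustrative Jenkinsfile sketch for a nightly test run
pipeline {
    agent any
    triggers {
        // Run the suite nightly around 2am (Jenkins cron syntax)
        cron('H 2 * * *')
    }
    stages {
        stage('Get test code') {
            steps { checkout scm }
        }
        stage('Run tests') {
            // Placeholder command; swap in whatever runs your suite
            steps { sh 'dotnet test MyTests.csproj --logger trx' }
        }
    }
    post {
        always {
            // Keep the result files so the orchestrator can report on the run
            archiveArtifacts artifacts: '**/*.trx', allowEmptyArchive: true
        }
    }
}
```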

Jenkins As A Test Orchestrator

There is no right or wrong choice. You have to decide what works best for your scenario. Look out for upcoming blog posts which provide examples of test environment hookups to different orchestrators!

1.2 Picking A Test Execution Environment
A test execution environment is the set of machines or devices where your tests will run. It is the receiving end of the test automation pipeline. The test execution environment receives tests from the test orchestrator, runs them, and gives feedback to the orchestrator about the test runs. The environment should consist of many machines (which could be mobile devices too!) that are able to run tests in parallel.

The choice of test execution environment depends on your team's needs. Arguably, this is the most custom decision in the test automation pipeline process. Does your team support a product hosted on a website? A mobile product? A desktop-based product? The answers will shape your approach. In our case, it was yes to all three. So we went with loosely coupled virtual machines for our desktop product, Sauce Labs for externally facing applications, an internally hosted Selenium Grid for our web product, and Xamarin Test Cloud for our mobile products. We chose these products based on a combination of what we needed to test, existing infrastructure limitations, and future proofing. The general direction from our perspective was to offload as much maintenance of the environment as possible to external vendors so we could focus on our core competency: writing proper tests which can be scaled.

The requirements we had for environments included speed, consistency, and repeatability. Each of the environments we picked could be initialized and destroyed programmatically (via the test orchestrator or the setup of the test script) and could be scaled to allow parallel test execution, achieving as quick a feedback loop as we wanted to pay for. The vendors we chose guaranteed environmental consistency. Ironically, the environment we have the most problems with is our loosely coupled VM environment. Because we are not experts at maintaining the machines and their states, we see different performance and different network conditions, which introduce environmental variability resulting in variable test results. The choice of test execution environment depends on what your team needs. Always ask yourself if the environment you choose will allow you to test your product(s) in a scalable, predictable way which will provide your team quick feedback.
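As a concrete example of a self-hosted execution environment, a small Selenium Grid can be stood up locally with the official Selenium Docker images; a sketch using the classic hub/node pattern (image versions omitted):

```shell
# Start the grid hub, exposing it on port 4444
docker run -d -p 4444:4444 --name selenium-hub selenium/hub

# Attach browser nodes to the hub so tests can run in parallel
docker run -d --link selenium-hub:hub selenium/node-chrome
docker run -d --link selenium-hub:hub selenium/node-firefox
```

Tests then point their remote WebDriver at http://localhost:4444/wd/hub, and the hub farms them out to the available nodes.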

2. Hook Up Your Test Orchestrator And Execution Environment

Once you pick your test orchestrator and execution environment, you will need to hook them up. This step is mostly technical and very environment and orchestrator specific. You will see examples of this step in following blog posts which focus on individual examples. Having said that, the one thing which you absolutely cannot take for granted is communication between the execution environment and the test orchestrator. This is something which has to be confirmed as early as possible. In our environment, this was one of the larger hurdles we had to get over, due to infrastructure setup. The worst part of setting up the test automation pipeline was waiting for network infrastructure adjustments to allow communication from the test orchestrator to the test environment. It is in the best interest of the test automation effort to get over this hurdle with at least one machine or device in the execution environment.

Hook Up One Machine To Ensure Connectivity!

3. Define Your Trigger Strategy

Once you have decided on your test orchestrator and execution environment, and confirmed that at least one device or machine in the execution environment can communicate with the test orchestrator, it's time to define how often your tests will run. This decision is again specific to what your team needs, the execution time of your tests, and the cost of execution.

In our scenario, we execute all of our tests on a nightly basis. This means that we run our entire regression suite overnight and constantly ensure that our execution pass rate is relatively high. We notice dips in pass rate on a daily basis and investigate them right away. This strategy works for us but does have disadvantages. Namely, we spend a lot of time investigating broken (usually due to test code bugs) tests. Your team may want to run subsets of tests on a nightly basis and full regression once per build.

The frequency of running tests depends on what type of feedback loop your team requires. From my perspective, the more often you release, the more feedback you require. So as my team moves toward daily pushes, we will increase the frequency of our test runs. It is imperative to determine what feedback loop works for you and set your test automation pipeline to trigger your test runs accordingly. Triggering your test automation run is usually controlled by your test orchestrator. As with the hookup, I will not discuss the technical details in this post, but leave them to the platform-specific posts coming up soon. The most important thing to remember is that you will have to align the trigger strategy with your release strategy to give feedback quickly enough for it to be relevant. Your team may want your tests (or a subset) to run on every check-in, or only when triggered manually. You have to decide what rate of feedback works for you.

More Feedback, Through Continuous Integration!

4. Define Your Execution Steps

We have a functional test execution environment hooked up to a functional test orchestrator, we have a defined trigger strategy, and we are ready to execute tests and read those beautiful reports. Now how do we do that? Just push the "run tests" button and read the results, right? Well, maybe not. Running test automation and receiving results may involve a bit more than that.

Before we run tests, we have to think about any prerequisite tasks that need to be executed. As with implementing the trigger strategy, this is fairly environment-specific, but worth talking about from a strategic perspective, since it takes some thought. It is best to assume that our environment will be barren: anything our product under test needs in order to work will have to be copied in, installed, or injected in some other way. We also have to think about where our test code is coming from and how it will be executed. In most situations I've experienced, we pull test source code from a repository and compile it before being able to execute it. Since we are treating our environments as brand new every time, we also have to ensure that any operating-system-level alerts or messages will be handled. I like to do this through registry manipulation (on Windows), so as part of the steps executed prior to test execution, I copy in any registry settings which could otherwise sabotage my run.

After compilation, we have to tell our test orchestrator where to find our tests. This step enables the tool actually running the tests (for example, MSTest) to use the proper binary files for execution. Finally, we have to ensure that the test results are easily accessible via reports. This last piece is a feature which is very mature in some orchestrators (for example, TFS/VSTS) and needs a plugin in others (for example, Jenkins). Test result generation can also be handled by some execution environments, usually ones sold by third-party companies, like Sauce Labs or Xamarin Test Cloud (Microsoft). It is important that we have a clear way of sharing the results of our tests.
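The execution map above can be sketched as an ordered list of steps that halts on the first failure. The step names and placeholder lambdas are illustrative; a real pipeline would shell out to your VCS, compiler, and test runner at each step:

```python
# Hedged sketch of the execution map; step names are illustrative and
# the lambdas stand in for real commands (git, msbuild, MSTest, etc.).
def run_pipeline(steps, log):
    """Run each (name, step) in order; stop and report on the first failure."""
    for name, step in steps:
        log.append(f"start: {name}")
        if not step():
            log.append(f"FAILED: {name}")
            return False
        log.append(f"done: {name}")
    return True

log = []
steps = [
    ("apply prerequisites (e.g. registry settings)", lambda: True),
    ("pull test source from repository",             lambda: True),
    ("compile tests",                                lambda: True),
    ("point orchestrator at test binaries",          lambda: True),
    ("execute tests and publish results",            lambda: True),
]
ok = run_pipeline(steps, log)
```

Swapping any lambda for one that returns False stops the run at that step and records the failure, mirroring how an orchestrator reports a failed stage.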

Think Through Your Test Execution Map

5. Educate Your Team

The last piece in establishing a successful test automation pipeline is probably the hardest. One of the biggest potential gains of test automation is its ability to be executed and examined by folks who did not write it. The idea that a Business Analyst or Developer could use tests written by a Test Engineer to ensure quality is delivered for a feature change allows a team to eliminate bottlenecks and increase delivery speed. But in order to do this, your team needs to be educated on how to run tests and analyze results.

Every team is different, but here are some strategies that have worked well for our teams:

1. "How To" Blog Posts
If your team doesn't have an internal blog, you should really get one. It's a great way to spread information. We use ours extensively to share knowledge, spark conversations, ask questions, and even show calendars of events! I have written blog posts on how to trigger automation, how to examine automation results, and how the automated tests work. They have saved me countless repetitions when new folks on our teams ask questions, and have proved to be a great source of knowledge sharing.

2. Word of Mouth
Specifically, daily automation briefs at your standup meeting. This is a really easy thing to do if you are examining the status of your test automation pipeline and automated runs on a regular basis. Determine consistent verbiage for the state, for example "green", "yellow", and "red", and communicate the state of the product based on the number of failures and the state of the test automation pipeline.

3. Regular Summaries
My team receives a daily test automation summary, which I put together from the dashboards that the test orchestrator provides for me. This daily reminder is key to your team's awareness of the success rate of the latest test automation execution and the readiness of the test automation pipeline. In this communication, I provide those two metrics (success rate, readiness) along with pointers on where to find the underlying details. The summary should be delivered by whichever communication method your team is most in tune with; in our case it's email, but it could be via Slack, HipChat, or even a daily blog post. Although this summary focuses on test execution results, it is important to also mention any test automation pipeline outages: that part needs wide distribution in case a particularly important channel has to be fixed quickly.
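Such a summary can even be generated straight from the run metrics. The field names and green/yellow/red thresholds below are assumptions for this sketch, not values from any specific dashboard or orchestrator:

```python
# Illustrative daily-summary generator; the 95%/85% state thresholds
# are invented for this example - pick ones that fit your team.
def daily_summary(passed: int, failed: int, pipeline_up: bool) -> str:
    """Render a short plain-text status message from run metrics."""
    total = passed + failed
    rate = 100.0 * passed / total if total else 0.0
    state = "green" if rate >= 95 else "yellow" if rate >= 85 else "red"
    lines = [
        f"Test automation summary: {state.upper()}",
        f"Pass rate: {rate:.1f}% ({passed}/{total})",
        f"Pipeline status: {'operational' if pipeline_up else 'OUTAGE - needs attention'}",
    ]
    return "\n".join(lines)

print(daily_summary(passed=980, failed=20, pipeline_up=True))
```

The same text can then be dropped into an email, Slack message, or blog post, so the delivery channel stays a formatting detail rather than a separate reporting effort.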

Obviously, there are other effective ways of educating your team: lunch & learns, training, floggings, whatever works for you. Just kidding about the floggings. The key, from my team's experience, is that a consistent, regular message is sent to a wide audience. The focus of the communication should be the status of the test delivery channel, and there should be reference points with step-by-step instructions on how to execute and analyze the tests which the test automation pipeline runs.

Early setup of a test automation pipeline really helped my team focus on writing tests that could be reliably executed and scaled. By designing and implementing the channel before producing a significant number of tests, we ensured that when we did produce a significant number of tests (thousands of them!), we were not worrying about why tests could not be executed or were not executing fast enough. We have followed the above-outlined channel design process multiple times, and each time found that it enabled us to focus on figuring out what to test, instead of how to run tests.


In this post, we talked about what a test automation pipeline is, what its value is, and how to set one up. We focused on the theory behind setting up the test automation pipeline and walked through the specific pieces of setting up a channel. In the next two blog posts, we will look at specific examples of how to set up a test automation pipeline for different environments and test orchestrators.

Helpful Links