Saturday, July 23, 2016

Influencing Quality At The Biggest Level Of The Test Pyramid Part 2

The last post I wrote acted as an introduction to a huge problem our team has finally solved: implementing fully functional, self-running, scheduled (or continuously integrated) unit tests.

This post will get into the tools and methodology we used to do this.

Our Unit Test Design Problems

Like all development teams striving to reach a higher level of quality maturity, our team wanted to write more unit tests. For the last year or so, we struggled to figure out how to use a framework that would allow us to do so. As mentioned before, our application under test isn't the most test friendly. When it was designed, testing was not really a driver, so we have to work with what we have now, while trying to change it in small pieces. Hence our decision to use MS Fakes.


Problem 1: How To Implement Fakes

This problem showed itself in the very beginning stages of our investigation into unit testing. We struggled to disseminate knowledge about the nitty-gritty implementation questions. Should we fake out everything? Should we do it in pieces? How do we speed up the compilation of builds with associated Fakes assemblies on our local machines? We solved these problems along the way, through discoveries like the ability to fake out only certain pieces of our business layer instead of the whole thing. A lot of our learning came from Microsoft's documentation. Articles such as the one mentioned above helped us tremendously to understand how to implement unit tests with Fakes for our individual classes, as individual developers and quality champions.
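For reference, limiting Fakes generation to just the pieces you need is controlled through the `.fakes` file that Visual Studio drops next to each faked assembly reference. A minimal sketch of what that looks like (the assembly and type names here are made-up placeholders, not our real ones):

```xml
<!-- OurBusinessLayer.fakes: generate shims for one class only -->
<Fakes xmlns="http://schemas.microsoft.com/fakes/2011/">
  <Assembly Name="OurBusinessLayer"/>
  <!-- Skip stub generation entirely to cut compile time -->
  <StubGeneration Disable="true"/>
  <ShimGeneration>
    <Clear/>
    <Add FullName="OurBusinessLayer.Orders.OrderProcessor"/>
  </ShimGeneration>
</Fakes>
```

Trimming the generation lists like this was the biggest lever we found for speeding up local builds.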

Problem 2 (The Big One): How To Motivate Our Team To Unit Test

So over the last year, we've figured out how to solve a lot of our individual technical problems. But then we realized that the biggest challenge would not be an individual implementation problem, but a team-level one.

We realized that in order to get the unit test movement going, we would need to provide a way for all of the developers writing code for our main behemoth application to implement unit tests, run them, and measure progress. This was our challenge, and we took it head on.

Step 1: Come up with a group of believers
Step 1: Come up with a group of believers
We decided to treat our unit testing effort like a newborn child. And as the saying goes, it takes a village to raise a child. So we created a village. One of our senior quality champions spearheaded the effort of coordinating a group of developers interested in unit testing and all things quality, forming a work group to push the cause. The goal of this group was to show continuous movement with respect to the "Big Rock" of unit testing. We made it a goal for all of our developers to be invested in this effort, and started to think of activities driven to push unit testing forward. The group came up with ideas to make unit testing interesting and fun. One idea involved disseminating knowledge about frameworks and documenting it on our internal blog (Confluence) page; another involved putting together a unit testing competition to provide a big lift in the number of tests entered at one time. All these efforts and ideas were great, but a few things needed to fall into place first. We needed to be able to show that a unit test could be run against our main app, and that we could measure our progress.

Beliebertho - BELIEBE in unit testing

Step 2: Provide The Tooling
Step 2: Provide The Tooling
We are lucky enough to work in a mainly homogeneous environment. We are mainly a Microsoft shop. We use C# as our language, mostly write code in Visual Studio, and use TFS (Team Foundation Server) as our build/tracking server. We use Microsoft Test Manager for test tracking and Coded UI for GUI testing. So when it came to implementing another Microsoft testing framework (MS Fakes), we thought it would be easy to demonstrate...

It Was Not.

Our goal with respect to tooling and unit testing was to give our developers the ability to write unit tests, check them in to source control, see them run against their code in the build environment, and measure their progress.

We were able to use resources on the web to figure out how to write unit tests ourselves, in our local environments. Given that unit testing and its associated best practices could be a blog post all on its own, I will not focus on it here. Trust me when I say we figured out how to do it, and are making steady progress on creating tests for our app individually.

What I would like to focus on is how we were able to run our unit tests against our environment, and what tools we used to measure our progress.

Running MS Fakes Based Unit Tests on TFS 2015

As mentioned before, we are mainly an MS shop, so when it came to running unit tests, we focused on using TFS. Since we started our effort about a year ago, we first tried to build our tests using last year's build system, namely "XAML" builds. We tried to get this going, but after writing a test, checking it in, and attempting to build, it would not work.

Plain and simple: If you are trying to run MS Fakes based unit tests, DO NOT USE XAML builds.

They are super hard to configure and problem ridden. We did a bunch of googling and could not figure out how to get them building in our build environment. Apparently there are ways, but for us it was much easier to switch to the shiny new vNext build system and follow the steps described in the MS documentation to create build definitions for testing. There are basically three:

  1. Build your solution that includes tests and fakes assemblies
  2. Provide a testing step
  3. Publish test results (optional)
Build def, edit VS Test task

A detailed description of how to do this, which helped us out a lot, can be found @
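Under the hood, those three steps boil down to something like the following command sequence (paths and names are hypothetical, and note that running Fakes-based tests requires a full Visual Studio Enterprise/Premium install on the build agent, not just the test agent bits):

```
REM 1. Build the solution, including test projects and generated *.Fakes assemblies
msbuild OurApp.sln /p:Configuration=Release

REM 2. Run the tests -- the "Visual Studio Test" build task wraps vstest.console.exe
vstest.console.exe Tests\bin\Release\OurApp.Tests.dll /EnableCodeCoverage /Logger:trx

REM 3. The "Publish Test Results" task then uploads the generated .trx file to TFS
```

The vNext tasks expose all of this through the build definition UI, which is what makes them so much easier to set up than the XAML equivalents.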

Keeping Up The Motivation...AKA Measurement

We knew that in order to measure how far we've come, we needed a tool our devs could use to visualize their progress. This is where SonarQube came into play for us. In order to stay motivated, we linked our unit test builds to SonarQube using the vNext builds.

SonarQube
SonarQube is an open source platform designed to measure code quality. Sonar covers seven axes of quality, one of which is code coverage. This was the most interesting part of Sonar for us.


Implementing Sonar In TFS

In order to end up with code coverage metrics through Sonar, we had to hook Sonar up to our build platform. Using vNext builds, this hookup was relatively easy. All we had to do was add some steps to our build definition and point them at our Sonar server.


We basically had to hook up two tasks:

  1. The very first task of our build: Begin SonarQube analysis
  2. The very last task of our build: End SonarQube analysis
A really good blog for how to implement the details of these two tasks can be found @
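For the curious, those two build tasks are wrappers around the SonarQube Scanner for MSBuild, so conceptually the build ends up doing something like this (the server URL and project key below are made-up placeholders):

```
REM "Begin SonarQube analysis" task
MSBuild.SonarQube.Runner.exe begin /k:"our-app" /n:"Our App" /v:"1.0" /d:sonar.host.url="http://sonar.example.local:9000"

REM ...the normal build + test + code coverage steps run in between...

REM "End SonarQube analysis" task: gathers the metrics and pushes them to the Sonar server
MSBuild.SonarQube.Runner.exe end
```

Because the begin step runs first and the end step runs last, everything the build produces in between (including the coverage files from the test task) gets picked up and shipped to Sonar.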

So at the end of the day, we ended up with a dashboard like the one shown in the second graphic below. We are able to see not only the overall unit test coverage of our product, but also drill into individual files (like the first graphic), which is really powerful for seeing which code paths are not being covered. Sonar now gives us a way of figuring out where to add unit tests, to achieve really badass code quality.

Sonar showing areas covered by unit tests in files

Sonar dashboard showing unit test % coverage per file

Step 3: Share The Success Story
Step 3: Share The Success Story
So now the final phase of our journey begins. We came from a place where unit tests, and automated testing in general, were but a fragile dream. One whisper of it not providing value, or of false positives, and it would be discarded. It is now a proud, strong gladiator who is not afraid to reveal him or herself (is our automation effort a he? a she? I don't know). It is now up to us as developers and quality champions to continue growing our efforts. Because even though we only have 0.2% coverage, the tooling for all levels of testing is in place and ready to use.

Have a good weekend, I'm going sailing.

Influencing Quality At The Biggest Level Of The Test Pyramid Part 1

Hey Y'all! The past couple of days have been huge for me. As a quality champion, I have always believed that it is my role to influence all areas of the test cycle. I have never been afraid to tackle quality problems outside of my domain. For the past few weeks I've been working on a doozy: fully functional unit testing.

So You're Telling me - So You're Telling Me we've figured out how to run automated tests for all levels of the pyramid?

Background: Testing Pyramid

In the quality world, we often like to refer to the testing pyramid. As Martin Fowler states in a blog post from 2012, the testing pyramid is a visual representation of a proposed test strategy, which follows the belief that the majority of automated tests should be written at the unit level, followed by service level tests (or integration tests), and finally by GUI tests. Although Mr. Fowler doesn't include them in his blog post, I also like to include manual tests at the very top of the pyramid. As much as I love having automation provide me with a pulse on how my application behaves in predefined situations, I don't believe we can ever get away from investigating odd, complex scenarios through the human touch.

Testing pyramid by Scott Allister
Testing pyramid by Martin Fowler

For the past few years, I've motivated my team to work really hard on UI tests, as we came to believe they were the biggest bang for our buck. But wait, you may say, Maciek, isn't that the opposite of what the testing pyramid states? Well, yes, yes it is.

Having said that, we made a conscious decision due to our circumstances. Our team members (quality champions) needed to first learn how to automate through code (see my previous posts about Top Gun) while providing value. We came from a world where the majority of the team did not know how to write code-driven automation, and we wanted not only to provide immediate value to our business partners (through quality), but also to give our team members a career-long skill. That is what code-driven GUI automation has given us.

After two years of training on the job, we have created a team that runs and maintains approximately 1500 automated GUI tests, run nightly in a lab environment that is not their own machine :) The tests are relatively stable and execute at a regular pass rate of about 85-90%. All of the tests are hand rolled and follow a programming pattern. Our mission of covering the highest level of our testing pyramid (automated GUI tests) and teaching our mostly non-technical QA team how to write programs for testing is nearly complete. Our team is now focusing on perfecting the craft of writing automated tests, and on ensuring they have a reliable suite of tests that can be run at any time, by anyone. It is really AWESOME.

Our team is re-focusing and starting to learn how to write integration tests within our environment. I'm confident that, because of the coding skills picked up in GUI automation (in the same language our application is written in, C#), they will be able to knock out that level of testing much quicker than the GUI tests.

So with that left to the team, a while ago I decided that I needed to focus on figuring out our biggest mission yet. 

Unit Testing: The Final Frontier

Captain Picard - Space is not the final frontier Unit Testing is

Pardon the Star Trek pun, but for our team, unit testing our main application has always been the elephant in the room. Our main business application is relatively old, and not super easy to test. About 6 months ago I started investigating why, and realized that our implementation of the framework we use for business logic doesn't allow for easy unit testing. We basically make use of a lot of private constructors and methods, without interfaces. We are getting better at this, but need immediate solutions to provide unit test coverage.

I've heard all the arguments: why don't you just re-write your application, or just interface everything, or even just plain "don't do it. Tell your business that you will not do it that way." We know we have a problem, and are tackling it from different directions. I will not get into all of them right now, but would like to focus on what I think is the hardest one to fix: unit testing an application which cannot expose public methods for all of its essential pieces, and which has no interfaces for tests to implement against.

Enter Microsoft Fakes


MS Fakes is a framework from Microsoft which allows you to isolate code for unit testing at run time. Although fairly heavy, Fakes allowed us to write unit tests for an application which was not very test friendly. Our developers and some quality champions have been successfully writing unit tests using this framework for about a year.

Fakes replace other components
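To make that concrete, here is roughly what a shim-based test looks like. This is a generic sketch rather than code from our app: `Invoice.CalculateDueDate` is a made-up method, and the test assumes a Fakes assembly has been generated for `System` so that `ShimDateTime` exists:

```csharp
using System;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class InvoiceTests
{
    [TestMethod]
    public void DueDate_IsThirtyDaysFromToday()
    {
        // Shims divert calls at run time, so even a static property
        // like DateTime.Now can be intercepted -- no interfaces needed.
        using (ShimsContext.Create())
        {
            System.Fakes.ShimDateTime.NowGet = () => new DateTime(2016, 7, 1);

            DateTime dueDate = Invoice.CalculateDueDate();

            Assert.AreEqual(new DateTime(2016, 7, 31), dueDate);
        }
    }
}
```

The `ShimsContext` scopes the diversion to this one test, which is what lets us unit test code full of private constructors and static calls without refactoring it first.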

The above described framework gave us a way to unit test. It gave us the ability to start covering the unit test level of the testing pyramid. Some argue it is not the best unit testing framework, and that it is a very heavy-handed approach. While I agree in theory, in practice it is the best we can do for our current situation. And so, dear reader, I will summarize our current situation:

  1. Our quality champions are now writing fully automated, regularly running (scheduled) GUI tests
  2. Our quality champions are starting to write fully automated, regularly running (scheduled) integration tests
  3. Our quality champions (mostly our developers) are starting to write unit tests. And we've proved that even though we are using a pretty heavyweight unit testing framework, it can be run in an automated way, and measured to see how much progress we are making.
I think we've breached the last frontier of the testing pyramid, and are starting down the path of full stack test coverage. But to find out how we've figured out running MS Fakes based unit tests, you'll have to read the next post. 

Stay tuned :)

Captain Picard So Much Win! - Fully Automated Unit Test Runs Using Fakes They're Possible!

Sunday, July 3, 2016

Teaching QA Automation: What I Learned In The Latest Top Gun

Hello there!

Maciek Where Have You Been?

After a bit of an absence, I'm back with another round of "insights" :) The past four months have been exciting, yet trying at the same time. It all began with my trip to San Francisco for the //Build 2016 conference. I was very grateful to be given the opportunity to go back to the conference, and I learned a crap ton of information. It specifically sparked my interest in Xamarin development, since it became a free tool offered by Microsoft, as opposed to a paid subscription for rich developers or companies :)

I came back from SF (San Francisco) and immediately jumped into the latest version of the GUI Top Gun training.

Top Gun

In the infancy of the quality team I work with, the standard of automation was one person writing crappy VBScript in QTP. The "test framework" did not work, nor did it execute any test cases at regular intervals. At that time, I was put into the position of "QA Lead" and handed the keys to the automation kingdom. With the help of my buddy, the other QA Lead, and a senior developer, we designed a training whose goal was to bring a hand-coded, pattern-oriented design to our GUI automation efforts.

We designed what morphed into a 6-week hands-on training course, designed to turn manual testers into test engineers who had the ability and excitement to spit out C# automated GUI test cases, based on an adapted page object model pattern, using the Coded UI framework.

We couldn't find engineers in Detroit who knew how to code and wanted to write test code. So we decided to train our own people to do just that.

The Latest Top Gun: Challenges Everywhere

After going through three iterations of the training and effectively converting about 10 folks from manual testers to test engineers, we came upon our biggest challenge yet. The latest intake of top gunners was no longer small, nor was it homogeneous with respect to background. 

GUI Top Gun morphed into a cross company training program. We had team members from my company and team members from the largest independent mortgage company, who had heard about Top Gun. We had team members who lived in California, and Florida. We had interns, we had full time team members...and we had me to instruct them all as effectively as possible :S The team was great, but it was definitely diverse.

The Biggest Challenge: Diversity of Communication

GUI Top Gun had experienced traditional diversity before. We taught folks with QA experience and without. We even had one team member from a different company. But this time, the biggest challenge was how to incorporate team members who worked remotely.

We knew that teaching coding for GUI automation was tough in person, and we had gotten a bit of a taste of remote teaching in the past, with some of our instructors working remotely at times, but nothing like what we were facing this time.

Of our 11 team members, 3 were working remotely. The thing with even one person working remotely is that you have to make them feel as much a part of the conversation as the folks 'in the room'. Traditional approaches such as conference calls, or even video calls, just don't cut it. The communication bandwidth is just not high enough. Even when everyone stays mindful that someone is "on the call", the person on the phone cannot hear exactly what is going on in the room's conversation. Most importantly, the person on the phone cannot see and experience the communication nuances happening in the physical conversation, through body language. Even if the folks on the phone are super aware of their situation and keep asking for repetition when they do not understand what is going on, they still miss out on the non-verbal communication cues.

The Solution To The Communication With Remote Team Members: Equality of Medium

The answer to the communication problem, as we experienced it, was normalcy of communication methods. To ensure that all team members had exactly the same communication bandwidth, we forced nearly all conversations to happen over the phone and through remote communication tools such as WebEx's Training Center.

The initial plan for the program was to run the first week as a full-time training on the phone, using WebEx Training Center as the lecture and interaction tool, and then get into a "War Room" to facilitate team member interaction. After that first week, we realized that although talking to people face to face was easy, talking to them on a headset while sharing a screen was not that much harder, and it included everyone. It took a few days for team members who don't usually communicate over a headset to get used to the process, but once they did, most did not mind the approach. All of our lectures and group sessions were facilitated through this medium, and we encouraged our team members to use the same format when solving problems in small groups.

At the end of the program, we got overwhelmingly positive feedback about the usefulness of the medium and the personal approach it provided. Remote team members loved it, and team members who do not usually deal with remote colleagues learned a new approach to communication and also reported a positive experience.

The approach also allowed us to use WebEx Training Center to record all training sessions and archive them for future viewing. We now have a catalogue of all theory sessions recorded during the Top Gun GUI training program for future viewing by team members interested in automated GUI testing. 

All in all, I used to hate on remote training tools. But after this round of Top Gun GUI, I love them. 
