Saturday, July 23, 2016

Influencing Quality At The Biggest Level Of The Test Pyramid Part 2

My last post introduced a huge problem our team has finally solved: implementing fully functional, self-running, scheduled (or continuously integrated) unit tests.

This post will get into the tools and methodology we used to do this.

Our Unit Test Design Problems


As I imagine all development teams striving to reach a higher level of quality maturity do, our team wanted to write more unit tests. For the last year or so, we struggled through figuring out how to use a framework that would let us do so. As mentioned before, our application under test isn't the most test-friendly. When it was designed, testing was not really a driver, so we have to work with what we have now while trying to change it in small pieces. Hence our decision to use MS Fakes (https://msdn.microsoft.com/en-us/library/hh549175.aspx).
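
To make that decision concrete, here is a minimal sketch of the kind of test Fakes enables. The names below (IOrderRepository, Order, OrderService, MyApp.Business) are hypothetical stand-ins for pieces of our business layer: for an interface, Fakes generates a stub type whose behavior you set through delegates, so the test never touches a real database.

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using MyApp.Business;        // hypothetical business-layer assembly
    using MyApp.Business.Fakes;  // stub types generated by MS Fakes

    [TestClass]
    public class OrderServiceTests
    {
        [TestMethod]
        public void GetOrderTotal_ReturnsTotalFromRepository()
        {
            // Stub the data-access interface; the delegate property name
            // follows the Fakes convention of method name + parameter types.
            var repository = new StubIOrderRepository
            {
                GetOrderInt32 = id => new Order { Id = id, Total = 42m }
            };

            var service = new OrderService(repository);

            Assert.AreEqual(42m, service.GetOrderTotal(7));
        }
    }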

MOAR UNIT TESTS cat meme


Problem 1: How To Implement Fakes

This problem showed itself in the very beginning stages of our investigation into unit testing. We struggled to disseminate knowledge about the nitty-gritty implementation problems: Should we fake out everything? Should we do it in pieces? How do we speed up the compilation of builds with associated Fakes assemblies on our local machines? We solved these problems along the way, through discoveries like the ability to fake out only certain pieces of our business layer rather than the whole thing. A lot of our learning came from Microsoft's documentation. Articles such as https://msdn.microsoft.com/en-us/library/hh549176.aspx and the one mentioned above (https://msdn.microsoft.com/en-us/library/hh549175.aspx) helped us tremendously in understanding how to implement unit tests with fakes for our individual classes, as individual developers and quality champions.
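
As a concrete example of faking out only a certain piece: the sketch below shims a single static property, DateTime.Now, inside a ShimsContext, closely following the MSDN articles above. PricingEngine is a hypothetical stand-in for one of our business classes that reads the clock internally.

    using System;
    using Microsoft.QualityTools.Testing.Fakes;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class PricingEngineTests
    {
        [TestMethod]
        public void HolidayDiscount_AppliesOnNewYearsEve()
        {
            // Shims are only active inside a ShimsContext.
            using (ShimsContext.Create())
            {
                // Detour the static DateTime.Now, which we otherwise
                // could not control without redesigning the code.
                System.Fakes.ShimDateTime.NowGet = () => new DateTime(2016, 12, 31);

                var discount = PricingEngine.GetHolidayDiscount(); // hypothetical call

                Assert.AreEqual(0.10m, discount);
            }
        }
    }

The nice part is that only mscorlib needs a Fakes assembly generated for a test like this; the rest of the solution builds exactly as before, which also helps with the local compile times we worried about.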

Problem 2 (The Big One): How To Motivate Our Team To Unit Test

So over the last year, we've figured out how to solve a lot of our individual technical problems. But then we realized the biggest challenge would not be an individual implementation problem, but a team-level one.

We realized that in order to get the unit test movement going, we would need to provide a way for all of the developers writing code for our main behemoth application to implement unit tests, run them, and measure progress. This was our challenge, and we took it head on.

Step 1: Come Up With A Group Of Believers
We decided to treat our unit testing effort like a newborn child, and as the saying goes, it takes a village to raise a child. So we created a village. One of our senior quality champions spearheaded the effort of coordinating a group of developers interested in unit testing and all things quality into a work group to push the cause. The goal of this group was to show continuous movement with respect to the "Big Rock" of unit testing.

We wanted all of our developers to get behind this goal, and started to think of activities to push unit testing forward. The group came up with ideas to make unit testing interesting and fun. One idea was disseminating knowledge about frameworks and documenting it on our internal blog (Confluence) page; another was putting together a unit testing competition to provide a big lift in the number of tests entered at one time. All these efforts and ideas were great, but they needed a few things to fall into place: we had to show that a unit test could be run against our main app, and that we could measure our progress.

Bieber meme - BELIEBE in unit testing

Step 2: Provide The Tooling
We are lucky enough to work in a mostly homogeneous environment; we are mainly a Microsoft shop. We use C# as our language, mostly write code in Visual Studio, and use TFS (Team Foundation Server) as our build/tracking server. We use Microsoft Test Manager for test tracking and Coded UI for GUI testing. So when it came to implementing another Microsoft testing framework (MS Fakes), we thought it would be easy to demonstrate...

It Was Not.


Our goal with respect to tooling and unit testing was to give our developers the ability to write unit tests, check them into source control, see them run against their code in the build environment, and measure their progress.

We were able to use resources on the web to figure out how to write unit tests ourselves in our local environments, and given that unit testing and its associated best practices could be a blog post all its own, I will not focus on it here. Trust me when I say we figured out how to do it, and we are making steady progress on creating tests for our app individually.

What I would like to focus on is how we were able to run our unit tests in our build environment, and what tools we used to measure our progress.

Running MS Fakes-Based Unit Tests on TFS 2015

As mentioned before, we are mainly an MS shop, so when it came to running unit tests we focused on TFS. When we started our effort about a year ago, we tried to build our tests using the previous-generation build system (https://msdn.microsoft.com/library/ms181715%28v=vs.120%29.aspx), namely "XAML" builds. We tried to get this going, but after writing a test, checking it in, and attempting to build, it simply would not work.

Plain and simple: if you are trying to run MS Fakes-based unit tests, DO NOT USE XAML builds.

They are super hard to configure and problem-ridden. We did a bunch of googling and could not figure out how to get them building in our build environment. Apparently there are ways (http://hamidshahid.blogspot.com/2012/11/microsoft-fakes-framework.html), but for us it was much easier to switch to the shiny new VNext build system and follow the steps described in the MS documentation to create build definitions for testing. There are basically three (a quick sanity check follows the list):

  1. Build your solution, including the test and Fakes assemblies
  2. Provide a testing step (the VS Test task)
  3. Publish test results (optional)

Build definition: editing the VS Test task
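
One gotcha worth noting on the testing step: by default, the VS Test task only picks up assemblies whose file names match a pattern like **\*test*.dll, so the name of your test project matters. A trivial smoke test (all names here are hypothetical) is a cheap way to prove the pipeline wiring before the real Fakes tests go in:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Builds to Behemoth.UnitTests.dll, which the default "*test*" filter matches.
    namespace Behemoth.UnitTests
    {
        [TestClass]
        public class PipelineSmokeTests
        {
            [TestMethod]
            public void VsTestTask_DiscoversAndRunsThisAssembly()
            {
                // If this never shows up in the build's test results, check the
                // test task's assembly filter and platform/configuration settings.
                Assert.IsTrue(true);
            }
        }
    }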

A detailed description of how to do this, which helped us out a lot, can be found at https://www.visualstudio.com/en-us/docs/test/continuous-testing/getting-started/getting-started-with-continuous-testing

Keeping Up The Motivation...AKA Measurement

We knew that to keep up the motivation, we needed to give our devs a way to visualize how far they've come. This is where SonarQube came into play for us. We linked our unit test builds to SonarQube using the VNext builds.

SonarQube (http://www.sonarqube.org/)
SonarQube is an open-source platform designed to manage code quality. Sonar covers seven axes of quality, one of which is code coverage. This was the most interesting part of Sonar for us.


Implementing Sonar In TFS

In order to end up with code coverage metrics in Sonar, we had to hook Sonar up to our build platform. Using VNext builds, this hookup was relatively easy. All we had to do was add some steps to our build definition and point them at our Sonar server.


We basically had to hook up two tasks:

  1. The very first task of our build: Begin SonarQube analysis
  2. The very last task of our build: End SonarQube analysis

Under the hood, these tasks wrap the begin and end steps of the SonarQube runner for MSBuild. A really good blog on how to implement the details of these two tasks can be found at https://blogs.msdn.microsoft.com/visualstudioalm/2015/08/24/build-tasks-for-sonarqube-analysis/


So at the end of the day, we ended up with a dashboard like the one shown in the second graphic below. We are able to see not only the overall unit test coverage of our product, but also drill into individual files (like the first graphic), which is really powerful for spotting which code paths are not being covered. Sonar now gives us a way of figuring out where to add unit tests to achieve really badass code quality.

Sonar showing areas covered by unit tests in files

Sonar dashboard showing unit test % coverage per file

Step 3: Share The Success Story
So now the final phase of our journey begins. We came from a place where unit tests, and automated testing in general, were but a fragile dream; one whisper of it not providing value, or of false positives, and it would be discarded. It is now a proud, strong gladiator who is not afraid to reveal himself or herself (is our automation effort a he? a she? I don't know). It is now up to us as developers and quality champions to keep growing our efforts. Because as we can tell, even though we only have 0.2% coverage, the tooling for all levels of testing is in place and ready to use.

Have a good weekend, I'm going sailing.


