Saturday, December 31, 2016

Automation Delivery Channels: Part 1...What is it?

Hello, friends, and Happy Holidays!



Over the past couple of months, I've been dipping my toes into the mobile automation sphere. As evidenced by my previous blog posts, my experience is not in this sphere, so it was an exciting challenge to take on. The effort was prompted by our existing automated testing suite for our mobile apps going stale. While automated tests had been designed, they were missing one thing: a stable delivery channel. That gap caused the suite to become nonfunctional when the person who wrote it temporarily left the team.


What Is A Delivery Channel?


The concept of a delivery channel speaks not to the automated tests themselves, but to how and where they are executed. A delivery channel defines the flow an automated test takes from the moment a trigger kicks off the automation to the delivery of test results: the way a test suite is started, where it runs, and how the results are compiled. The concept applies to most types of test automation. Whether it is desktop automation, web automation, mobile automation or even unit tests, a stable, repeatable way of kicking off, running and gathering results for automated tests needs to exist.
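If it helps to picture it, here is a tiny sketch of the moving parts a delivery channel covers. The types below are purely my own illustration of the concept, not part of any real test framework:

```csharp
// A sketch only: these types illustrate the concept and are not from any real framework.
public class TriggerContext { public string Source; }          // e.g. schedule, check-in, manual kick-off
public class ExecutionEnvironment { public string Name; }      // e.g. lab VM, device cloud, build agent
public class TestRunResult { public int Passed; public int Failed; }

public interface IDeliveryChannel
{
    // How the suite is kicked off, and where it runs.
    TestRunResult Run(TriggerContext trigger, ExecutionEnvironment environment);

    // How the results are compiled and delivered back to the team.
    void PublishResults(TestRunResult result);
}
```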

Why Is A Delivery Channel Important?


1. Avoiding Bottlenecks

To avoid situations where automation is executed on a single machine or device and controlled by a single person or small group of people, it is extremely important to establish a delivery channel very early in the lifespan of any automation effort. It is very easy for a team to get caught up in writing as many tests as quickly as possible, and to incur technical debt around how they will be consistently run. Without a consistent way of delivering (executing) the automated tests, the team will create a bottleneck very quickly, and will not see the full value of its shiny new tests. The team will fall into a cycle of using someone's time (and devices or individual machines) to run the suite, which inevitably turns one person, or a small group of people, into the specialist. While this is good for the individual, it is not good for the team, since people do crazy people things, like switch teams or go on maternity leave.



2. Avoid Test Refactoring For The Wrong Reasons

While refactoring a test is really great and should be done on a regular basis, refactoring tests for the purpose of scale is an avoidable exercise. If you keep in mind that the tests you are writing need to execute at scale, because that is how you have been executing them from the earliest stages of their lifespan, then you will naturally avoid designing them in a way that relies on the state of a single environment...like your computer. Having a delivery channel available from the earliest stage of an automated testing effort allows for this type of mindset!

3. Quick Feedback Loop

A flexible delivery channel also provides speed in execution. It is important to note that a stable delivery channel may not speed up individual test execution, but if implemented correctly, it provides a consistent, controllable way to scale the number of machines or devices used during test execution. The more devices available, the quicker a test suite can run. Additionally, greater flexibility in device or machine choice gives your team the ability to run tests on different devices as necessary during bug triage.


4. Consistent Environment

A delivery channel also provides a consistent environment in which to execute tests more than once. This is important since, in some situations, you may need to repeat the execution of a test or suite, such as when your tests fail and you want to re-run them to replicate the failure scenario. Because an automation delivery channel is controlled by configuration, it is repeatable and stable, which ensures the tests you've run can be repeated on the same environments. More importantly, your team will always know what to expect with respect to the environment.

Right about now, you're probably thinking (because I know what you're thinking, right? ;)), "OK, I'm convinced this is an important theory, but I'd like to know how." Well, you're in luck: in part 2 of this post, I'll give you examples of how I've done this in my journey over the past couple of months. I'll show you how to implement a delivery channel for mobile automated tests, with examples of how I've done this for Appium and Xamarin UI Tests, which are run on two different device clouds, all using TFS as the build and trigger platform. Stay tuned!

Sunday, October 16, 2016

StarWest 2016 Recap: Some Major Implementable Takeaways




Two weeks ago I attended the StarWest conference in Anaheim, California. To give a quick synopsis, it is one of the most recognized QA conferences in North America. The conference is put on by a company called "TechWell", and it was focused on spreading new ideas throughout the QA community. It was held at Disneyland in Anaheim, California, and I was there from Monday to Friday. The pre-conference workshops started on Monday, and the conference itself was Wednesday and Thursday. I also stuck around for the "Leadership Summit", which was an add-on.






Industry Trends
One of the reasons I love attending conferences is to figure out what trends are permeating the industry I operate in. While my traditional conference of choice is still Microsoft Build, I decided that as a quality champion, I needed to inspect the traditional testing industry to ensure I wasn't missing things my team could benefit from. See, at my place of work, we push our teams to adopt new things every day, and we had the ability to design our process from the ground up, injecting a crap ton of technical focus during the early group-forming stages. We focused a lot on developing our technical skills for testing, and on growing environments which would allow us to express those skills in the art that is automation. Given those skills are currently focused around a .NET environment, going to MS Build was very natural for me. But as our team matures, I wanted to see how we compared against other testing teams, and to ensure that from a strict quality perspective we were not missing out.


Total Team Testing


It was surprising to see how many conference attendees were struggling with adopting agile and understanding their role as testers in an agile process. There were at least three talks with respect to "Agile Testing" (there actually was an "Agile Testing Track"), and at least three talks with respect to "DevOps & Testing" (again, in its own track). In numerous talks I attended, folks were throwing up their hands to identify themselves as part of waterfall teams. In general, there was no clear consensus on what it meant for testers to be agile, and numerous approaches were proposed as to what an agile tester should be and how to keep up with the increased velocity of development. I noticed that we as an industry group are trying to change the way we think about testing.


Numerous sessions I attended attempted to identify what it meant for us as testers to be a part of the new way of doing things. The one that really resonated with me was @BobGalen's talk during the second day of the workshops. Bob was really pushing the idea of testing being a team responsibility, with the quality engineer adopting a "quality champion" role, as opposed to only testing. This approach implies that the test engineer becomes a sort of testing coach, and involves the team in the testing process. He or she may be the architect of the test plan, but everyone on the team tests. I really liked this mindset, because for the most part at Title Source, we have adopted it and have worked out a lot of the kinks that come with delivering value to the team without testing everything ourselves.

Implementing Team Testing Takeaways
There were two main implementable takeaways which I noted for transitioning from individual to team testing.

1. In Agile We Spiral Into Everything...Including Testing (Courtesy of @BobGalen)
The idea that things have to be 100% polished before delivery does not jibe with the agile methodology. This lesson also applies to testing. As Bob explained to me, in agile testing we need to focus less on concrete test artifacts and more on understanding the domain knowledge of the team. The greater the domain knowledge of the team, the less formal documentation we as testers have to provide, and the more guiding we need to do. Having said that, if your team is brand new and does not understand the business process, you probably still need concrete testing artifacts (for example, test cases) to provide more guidance to the team. The more the team matures and the more business knowledge you pick up, the less structured your test artifacts can become, and the more you can trust your team's knowledge of what is "right". To me, this is the optimal scenario, because as a quality champion, you can then spend more of your energy on strategic test approaches, and on learning testing approaches you may not have thought of (or had time to execute) before...like performance or security tests.

Implement The Advice!
So to "Gump It Down" (sorry for the Quicken Loans Family of Companies jargon), the less business domain knowledge your team has, the more specific the test artifacts have to be. If your team is new, write test cases, if you have a good deal of business experience, allow for more exploration during the test process.


Shift Left And Use Automation To Do It!


As I was listening to @MaryHThorne preach about how to implement a behaviour driven development approach using a tool called SpecFlow, something dawned on me. As test engineers, we want to have more time to explore new test processes! I was listening to Mary and thinking of how the tool she was preaching about could help me "shift left".

Shifting left essentially refers to testing early and often. As Mary was talking about SpecFlow, and basically wrapping automated methods in human readable verbs to be used during writing of acceptance criteria for user stories, I thought that this idea could help me and my team at the same time! The tool would allow me to be a bit more selfish, as giving others the ability to write automated tests directly for acceptance criteria would take some work off of my plate, and give me time to bring other testing processes into the fold.

I won't go into details as to how SpecFlow works, but the idea that a product owner could write acceptance criteria with the team and they would be translated into tests really excited me. The product owner would benefit from this model since he or she would be able to (would be forced to) gain intimate knowledge of the product he or she is driving (specifically the way it should behave). This would force him or her to interact closely with his or her team, and truly have a stake in the development process (since he or she wrote the criteria).

Additionally, the product owner would be forced to become more technically savvy, since writing acceptance criteria for automation means learning about the development environment (SpecFlow is an IDE package). Having said the above, we do have to understand that there is a bit of work to implement the SpecFlow verbs and "CRUD" (Create, Read, Update, Delete) libraries. But after the initial investment, everyone wins. The product owner becomes more involved, the team gets awesome acceptance criteria, and the test engineer gets to focus on bringing in different types of test approaches, raising overall quality. And as shifting left goes, everything happens earlier in the development process, since all of the above tasks can happen in parallel! Thanks @MaryHThorne, you inspired me!

Implement The Advice!
To implement the above, you need two things: an open-minded product owner, and some time to explore SpecFlow and write some initial libraries. Go to the SpecFlow link below, load it into Visual Studio, and create a simple Create/Read/Update library. Then talk to your product owner. Have him or her write some acceptance criteria for a story. Create a simple test using SpecFlow, show the product owner the test, repeat!
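To make that concrete, here is a rough sketch of what the step bindings behind such acceptance criteria might look like. The scenario and the in-memory "repository" are hypothetical stand-ins I made up for illustration; only the [Binding]/[Given]/[When]/[Then] attributes come from SpecFlow itself:

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

// The feature file, written with the product owner (hypothetical example):
//   Scenario: Create a customer
//     Given a customer named "Jane Doe"
//     When the customer is saved
//     Then a customer named "Jane Doe" can be found

[Binding]
public class CustomerCrudSteps
{
    // An in-memory stand-in for a real "CRUD" library.
    private readonly List<string> _store = new List<string>();
    private string _pendingName;

    [Given(@"a customer named ""(.*)""")]
    public void GivenACustomerNamed(string name)
    {
        _pendingName = name;
    }

    [When(@"the customer is saved")]
    public void WhenTheCustomerIsSaved()
    {
        _store.Add(_pendingName);
    }

    [Then(@"a customer named ""(.*)"" can be found")]
    public void ThenACustomerCanBeFound(string name)
    {
        Assert.IsTrue(_store.Contains(name));
    }
}
```

Once bindings like these exist, every new scenario the product owner writes in the same vocabulary becomes an executable test for free.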

Overall Conference Rating
So as you saw above, I learned a lot, brought back some implementable thoughts, and had a great time (evidence below). I take conferences as an opportunity to take vacation around the area, and this was no exception. Just for the record, it's on my own $ and through my own organizational efforts.

The conference provided a lot of inspiration and the opportunity to meet some great, knowledgeable, fun people in the testing family. Our team is working on the ideas above (and more), and so I deem this conference a 4/5. The only reason I did not give the conference the last star is because I did experience some speakers who were not 100% prepared for their talks. To those, I gave individual feedback. I hope they treat it like all feedback should be treated: a gracious gift in the process of continuous improvement.




Links

StarWest
Shift Left Testing
SpecFlow: A DotNet Behaviour Driven Design Testing Tool

Twitter Handles Mentioned (aka worth following!)

@BobGalen
@MaryHThorne
@MichaelBolton

Friday, September 23, 2016

Coded UI Running On MS Lab Center Does Not Play Well With Private Network Screens!

Hey Friends,

Recently I came back from vacation and found all of our GUI tests in a non-running state. Understandably I freaked out, and needed to do a bit (a week!) of investigation into what was causing the problem. I figured it out eventually, and wanted to share the fix with you in case you ever run into this.

Problem
When running GUI automation on a Microsoft Lab Center environment, communication between the test controller and test agent cannot be established for GUI automation when the agent (the machine being utilized for GUI testing) presents a "Private Network Screen". More specifically, the remote user session which is necessary for GUI testing cannot be established. You can tell whether the private network screen is enabled by manually initiating a remote desktop session to your test agent. If you see the screen below, you will not have a good time.



Troubleshooting From Lab Center: Ensure The Private Network Screen Is The Cause
Sometimes test agents lose session connectivity with the test controller, leading to "Not Ready" test lab states. In most situations, these states can be remedied by "Repairing"...i.e., right-clicking on the test lab and choosing "Repair".

Attempt Repair
This is the first thing you should try. Right-click on the name of the test lab, enter the appropriate credentials and initiate the repair. You should see the test lab take action as evidenced in the screenshot below. Basically, the test controller will attempt to restart the machine and re-establish the session, even re-install the agent. The controller is pretty smart.... :)

Oh No. It Won't Repair
A typical repair will return your lab to a "Ready" state relatively quickly; usually a little longer than it takes to reboot your test agent. If you are sitting around for a while, watching the repair process and noticing a long pause on the "Waiting for agent to restart" state, leading to an eventual "Waiting for test agent to respond" state, start cursing, because the private network screen is blocking your session.



Solution

DISABLE THE PRIVATE NETWORK SCREEN!



Solution Implementation

Hacky Option: Part 1
Run a PowerShell script, kicked off by the Windows Task Scheduler, to remove a few registry keys on a regular basis. The regular basis is required if your test machine (agent) is influenced by an Active Directory policy, specifically one which implements the logon screen. The task screenshot can be found below. I had a few problems with the arguments, so I am providing them below too. The arguments will start the script and spit out a log of errors.



Arguments: .\RemoveLogonBanner >> c:\RemoveLogonBanners.log
Start In: C:\Users\User1\Desktop\ScriptFolder

Hacky Option: Part 2

Write a script to remove the registry keys for the private network (logon) screen. An example of such a script can be found on my GitHub. Please note this script requires PowerShell 2.0 and the "PSRemoteRegistry" module, which can be found in the PowerShell Gallery.
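My script is PowerShell (see the GitHub link above), but the heart of it is just deleting the logon banner values. For illustration, here is roughly the same operation as a small C# program. The key path and value names below are the standard interactive logon banner values, so verify them against your own AD policy before deleting anything, and run it elevated:

```csharp
using Microsoft.Win32;

class RemoveLogonBanner
{
    static void Main()
    {
        // Standard location of the interactive logon banner ("private network screen").
        // Verify these names against your own AD policy first; requires admin rights.
        const string keyPath = @"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System";

        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(keyPath, writable: true))
        {
            if (key == null) return;

            // 'false' means: don't throw if the value has already been removed.
            key.DeleteValue("LegalNoticeCaption", false);
            key.DeleteValue("LegalNoticeText", false);
        }
    }
}
```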

Best Option

I had to live with the hacky option for a few days before my awesome server dudes were able to get the policy change implemented. But they got rid of the screen by removing the test agent(s) from the AD policy which controls the "Logon screen". Talk to your IT admin or server admin to help you with this, unless you have control over it yourself. After the policy is changed, you may need to run "gpupdate /force" from the command line to implement the policy change right away; otherwise you will need to wait for the regular policy update cycle.

Until next time!


Saturday, July 23, 2016

Influencing Quality At The Biggest Level Of The Test Pyramid Part 2

The last post I wrote acted as an introduction to a huge problem our team has finally solved: implementing fully functional, self-running, scheduled (or continuously integrated) unit tests.

This post will get into the tools and methodology we used to do this.

Our Unit Test Design Problems


As I imagine all development teams striving to reach a higher level of quality maturity do, our team wanted to write more unit tests. For the last year or so, we struggled through figuring out how to use a framework that allowed us to do so. As mentioned before, our application under test isn't the most test friendly. When it was designed, testing was not really a driver, and so we have to work with what we have now, while trying to change it in small pieces. Hence our decision to use MS Fakes (https://msdn.microsoft.com/en-us/library/hh549175.aspx).



Problem 1: How To Implement Fakes

This problem really showed itself in the very beginning stages of our investigation into unit testing. We struggled to disseminate knowledge about the nitty-gritty implementation problems. Things like: should we fake out everything? Should we do it in pieces? How do we speed up the compilation of builds with associated fakes on our local machines? We solved these problems along the way, through discoveries like the ability to fake out only certain pieces of our business layer and not the whole thing. A lot of our learning came from Microsoft's documentation. Articles such as https://msdn.microsoft.com/en-us/library/hh549176.aspx, and the one mentioned above (https://msdn.microsoft.com/en-us/library/hh549175.aspx), helped us tremendously in understanding how to implement unit tests with fakes for our individual classes, as individual developers and quality champions.
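For a taste of what these tests look like, here is the classic shim example from the MSDN articles above: detouring a static property the code under test depends on. It assumes the test project contains a System.fakes file so the shim types get generated:

```csharp
using System;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ClockDependentTests
{
    [TestMethod]
    public void Year_IsShimmedToY2K()
    {
        // Shims only apply inside a ShimsContext, which keeps the detour
        // from leaking into other tests.
        using (ShimsContext.Create())
        {
            // Detour the static DateTime.Now getter to a fixed date.
            System.Fakes.ShimDateTime.NowGet = () => new DateTime(2000, 1, 1);

            Assert.AreEqual(2000, DateTime.Now.Year);
        }
    }
}
```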

Problem 2 (The Big One): How To Motivate Our Team To Unit Test

So over the last year, we've figured out how to solve a lot of our individual technical problems. But then we realized the biggest challenge would not be an individual implementation problem, but a team-level one.

We realized that in order to get the unit test movement going, we would need to provide a way for all of our developers writing code for our main behemoth application to implement unit tests, run them, and measure progress. This was our challenge, and we took it head on.

Step 1: Come up with a group of believers
We decided to treat our unit testing effort like a newborn child. And as the saying goes, it takes a village to raise a child. So we created a village. One of our senior quality champions spearheaded the effort of coordinating a group of developers interested in unit testing and all things quality, to form a work group to push the cause. The goal of this group was to show continuous movement with respect to the "Big Rock" of unit testing. We wanted all of our developers to be invested in this goal, and started to think of activities driven to push unit testing forward. The group came up with ideas to make unit testing interesting and fun. One idea included disseminating knowledge about frameworks and documenting it on our internal blog (Confluence) page; another involved putting together a unit testing competition to provide a big lift in the number of tests entered at one time. All these efforts and ideas were great, but needed a few things to fall into place. We needed to be able to show that a unit test could be run against our main app, and that we could measure our progress.


Step 2: Provide The Tooling
We are lucky enough to work in a mainly homogeneous environment. We are mainly a Microsoft shop: we use C# as our language, mostly write code in Visual Studio, and use TFS (Team Foundation Server) as our build/tracking server. We use Microsoft Test Manager for test tracking and Coded UI for GUI testing. So when it came to implementing another Microsoft testing framework (MS Fakes), we thought it would be easy to demonstrate...

It Was Not.


Our goal with respect to tooling and unit testing was to provide our developers the ability to write unit tests, check them in to source control, see them run against their code in the build environment, and measure their progress.

We were able to use resources on the web to figure out how to write unit tests ourselves, in our local environments, and given that unit testing and its associated best practices could be a blog all its own, I will not focus on it here. Trust me when I say we were able to figure out how to do it, and we are making steady progress on creating tests for our app individually.

What I would like to focus on, is how we were able to run our unit tests against our environment and what tools we used to measure our progress.

Running MS Fakes Based Unit Tests on TFS 2015

As mentioned before, we are mainly an MS shop, so when it came to running unit tests, we focused on using TFS. Since we started our effort about a year ago, we first tried to build our tests using last year's build system (https://msdn.microsoft.com/library/ms181715%28v=vs.120%29.aspx), namely the "XAML" builds. We tried to get this going, but after writing a test, checking it in, and attempting to build, it would not work.

Plain and simple: If you are trying to run MS Fakes based unit tests, DO NOT USE XAML builds.

They are super hard to configure, and problem-ridden. We did a bunch of googling and could not figure out how to get them building in our build environment. Apparently there are ways (http://hamidshahid.blogspot.com/2012/11/microsoft-fakes-framework.html), but for us it was much easier to switch to the shiny new vNext build system and follow the steps described in the MS documentation to create build definitions for testing. There are basically three:

  1. Build your solution that includes tests and fakes assemblies
  2. Provide a testing step
  3. Publish test results (optional)
Build def, edit VS Test task

A detailed description of how to do this, which helped us out a lot, can be found @ https://www.visualstudio.com/en-us/docs/test/continuous-testing/getting-started/getting-started-with-continuous-testing

Keeping Up The Motivation...AKA Measurement

We knew that in order to keep the momentum going, we needed to provide a tool for our devs to visualize how far they've come. This is where SonarQube came into play for us. To stay motivated, we linked our unit test builds to SonarQube using the vNext builds.

SonarQube (http://www.sonarqube.org/)
SonarQube is an open source platform designed to cover code quality. Sonar covers seven axes of quality, one of which is code coverage. This was the most interesting part of Sonar for us.


Implementing Sonar In TFS

In order to end up with code coverage metrics through Sonar, we had to hook Sonar up to our build platform. Using vNext builds, this hookup was relatively easy. All we had to do was add some steps to our build definition and hook them up to our Sonar server.


We basically had to hook up two tasks:

  1. The very first task of our build: Begin SonarQube analysis
  2. The very last task of our build: End SonarQube analysis
A really good blog post on how to implement the details of these two tasks can be found @ https://blogs.msdn.microsoft.com/visualstudioalm/2015/08/24/build-tasks-for-sonarqube-analysis/


So at the end of the day, we ended up with a dashboard like the one shown in the second graphic below. We are able to see not only the overall unit test coverage of our product, but also to drill into the individual files (like the first graphic), which is really powerful for seeing which code paths are not being covered. Sonar will now give us a way of figuring out where to add unit tests, to achieve really bad ass code quality.

Sonar showing areas covered by unit tests in files

Sonar dashboard showing unit test % coverage per file

Step 3: Share The Success Story
So now the final phase of our journey begins. We came from a place where unit tests, and automated testing in general, were but a fragile dream. One whisper of it not providing value, or of false positives, and it would be discarded. It is now a proud, strong gladiator who is not afraid to reveal him or herself (is our automation effort a he? a she? I don't know). It is now up to us as developers and quality champions to continue growing our efforts. Because as we can tell, even though we only have 0.2% coverage, the tooling for all levels of testing is in place and ready to use.

Have a good weekend, I'm going sailing.



Influencing Quality At The Biggest Level Of The Test Pyramid Part 1

Hey Y'all! The past couple days have been huge for me. As a quality champion, I have always been under the belief that it is my role to influence all areas of the test cycle. I have never been afraid to tackle problems related to quality outside of my domain. For the past few weeks I've been working on a doozy (sp?): fully functional unit testing.


Background: Testing Pyramid

In the quality world, we often like to refer to the testing pyramid. As Martin Fowler states in a blog post from 2012 (http://martinfowler.com/bliki/TestPyramid.html), the testing pyramid is a visual representation of a proposed test strategy, which follows the belief that the majority of automated tests should be performed at the unit level, followed by service level tests (or integration tests) and finally by GUI tests. Although Mr. Fowler doesn't include them in his blog post, I also like to include manual tests at the very top of the pyramid, since as much as I love to have automation provide me with a pulse of how my application is behaving in predefined situations, I don't believe we can ever get away from investigating odd, complex scenarios through the human touch.

Testing pyramid by Alister Scott (https://watirmelon.com/tag/testing-pyramid/)
Testing pyramid by Martin Fowler (http://martinfowler.com/bliki/TestPyramid.html)

For the past few years, I've motivated my team to work really hard on UI tests, as we came to believe they were the biggest bang for our buck. But wait, you may say, Maciek, isn't that the opposite of what the testing pyramid states? Well, yes, yes it is.

Having said that, we made a conscious decision due to our circumstances. Our team members (quality champions) needed to first learn how to automate through code (see my previous posts about Top Gun), while providing value. We came from a world where the majority of the team did not know how to write code-driven automation, and we wanted not only to provide immediate value to our business partners (through quality), but also to provide a career-long skill to our team members. That is what code-driven GUI automation has given us.

After two years of training on the job, we have created a team that runs and maintains approximately 1500 automated GUI tests, which are run on a nightly basis, in a lab environment that is not their own machine :) The tests are relatively stable and execute at a regular pass rate of about 85-90%. All of the tests are hand-rolled and follow a programming pattern. Our mission of covering the highest level of our testing pyramid (automated GUI tests) and teaching our mostly non-technical QA team how to write programs for testing is nearly complete. Our team is now focusing on perfecting the craft of writing automated tests, and on ensuring they have a reliable suite of tests that can be run at any time by anyone. It is really AWESOME.

Our team is re-focusing and starting to learn how to write integration tests within our environment, and I'm confident that, because of the coding skills picked up in GUI automation (in the same language our application is written in, C#), they will be able to knock out that level of testing much quicker than the GUI tests.

So with that left to the team, a while ago I decided that I needed to focus on figuring out our biggest mission yet. 

Unit Testing: The Final Frontier


Pardon the Star Trek pun, but for our team, unit testing our main application has always been the elephant in the room. Our main business application is relatively old, and not super easy to test. About 6 months ago I started investigating why, and realized that our implementation of the framework we are using for business logic doesn't allow for easy unit testing. We basically make use of a lot of private constructors and methods, without interfaces. We are getting better at this, but we need immediate solutions to provide unit test coverage.

I've heard all the arguments: why don't you just re-write your application, or just interface everything, or even just plain out "don't do it. Tell your business that you will not do it that way". We know we have a problem, and are tackling it from different directions. I will not get into all of them right now, but would like to focus on what I think is the hardest one to fix: unit testing an application which does not expose public methods for all of its essential pieces and does not have interfaces to code tests against.

Enter Microsoft Fakes

(https://msdn.microsoft.com/en-us/library/hh549175.aspx).

MS Fakes is a framework from Microsoft which basically allows you to isolate code for unit testing, at run time. Although fairly heavy, Fakes allowed us to provide unit tests for an application which was not very test friendly. Our developers and some quality champions have been successfully writing unit tests using this framework for about a year.

Fakes replace other components
https://msdn.microsoft.com/en-us/library/hh549175.aspx

The above described framework gave us a way to unit test. It gave us the ability to start covering the unit test level of the testing pyramid. Some argue it is not the best unit testing framework, and that it is a very heavy handed approach. While I agree in theory, for us in practise it is the best that we can do for our current situation. And so, dear reader, I will summarize our current situation to you:

  1. Our quality champions are now writing fully automated, regularly running (scheduled) GUI tests
  2. Our quality champions are starting to write fully automated, regularly running (scheduled) integration tests
  3. Our quality champions (mostly our developers) are starting to write unit tests. And we've proved that even though we are using a pretty heavyweight unit testing framework, it can be run in an automated way and measured to see how much progress we are making.
I think we've breached the last frontier of the testing pyramid, and are starting down the path of full stack test coverage. But to find out how we've figured out running MS Fakes based unit tests, you'll have to read the next post. 

Stay tuned :)


Sunday, July 3, 2016

Teaching QA Automation: What I Learned In The Latest Top Gun

Hello there!

Maciek Where Have You Been?

After a bit of an absence, I'm back with another round of "insights" :) The past four months have been exciting, yet trying at the same time. It all began with my trip to San Francisco for the //Build 2016 conference. I was very grateful to be given the opportunity to go back to the conference, and I learned a crap ton of information. It specifically sparked my interest in Xamarin development, since Xamarin became a free tool offered by Microsoft, as opposed to a paid subscription for rich developers or companies :)

I came back from SF (San Francisco) and immediately jumped into my latest version of the GUI Top Gun training.



Top Gun

In the infancy of the quality team I work with, the standard of automating was one person writing crappy VBScript in QTP. The "test framework" did not work, nor did it execute any test cases at regular intervals. At that time, I was put into the position of "QA Lead" and handed the keys to the automation kingdom. With the help of my buddy, the other QA Lead, and a senior developer, we designed a training whose goal was to bring a hand-coded, pattern-oriented design to our GUI automation efforts.

What we designed morphed into a 6-week, hands-on training course that turned manual testers into test engineers who had the ability and excitement to spit out C# GUI automated test cases, based on an adapted page object modelling pattern, using the Coded UI framework (a sketch of the pattern follows below).

We couldn't find engineers in Detroit who knew how to code and wanted to write test code. So we decided to train our own people to do just that.
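To give a flavor of what the course teaches, here is a minimal sketch of a page object built on Coded UI. The page class, URL and control IDs are invented for illustration; they are not from our actual framework:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;

// Hypothetical page object: one class per screen, exposing intention-revealing
// methods instead of raw control lookups scattered through the tests.
public class LoginPage
{
    private readonly BrowserWindow _browser;

    public LoginPage(BrowserWindow browser)
    {
        _browser = browser;
    }

    public static LoginPage Open()
    {
        // The URL is a made-up example.
        return new LoginPage(BrowserWindow.Launch(new Uri("https://myapp.example/login")));
    }

    public LoginPage EnterUserName(string userName)
    {
        var userField = new HtmlEdit(_browser);
        userField.SearchProperties[HtmlEdit.PropertyNames.Id] = "username";
        Keyboard.SendKeys(userField, userName);
        return this; // returning the page lets tests chain steps fluently
    }

    public void ClickLogIn()
    {
        var logInButton = new HtmlButton(_browser);
        logInButton.SearchProperties[HtmlButton.PropertyNames.Id] = "login";
        Mouse.Click(logInButton);
    }
}
```

A test then reads as a sequence of business actions (LoginPage.Open().EnterUserName("maciek").ClickLogIn()), which is exactly the mindset shift the course aims for.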


The Latest Top Gun: Challenges Everywhere

After going through three iterations of the training and effectively converting about 10 folks from manual testers to test engineers, we came upon our biggest challenge yet. The latest intake of top gunners was no longer small, nor was it homogeneous with respect to background. 

GUI Top Gun morphed into a cross company training program. We had team members from my company and team members from the largest independent mortgage company, who had heard about Top Gun. We had team members who lived in California, and Florida. We had interns, we had full time team members...and we had me to instruct them all as effectively as possible :S The team was great, but it was definitely diverse.

The Biggest Challenge: Diversity of Communication

GUI Top Gun had experienced traditional diversity before. We taught folks with QA experience and without. We even had one team member from a different company. But this time, the biggest challenge was how to incorporate team members who worked remotely.

We knew that teaching coding for GUI automation was tough in person. We'd had a bit of a taste of remote work in the past, with some of our instructors occasionally working remotely, but nothing like what we were facing this time.

Of our 11 team members, 3 were working remotely. The thing with even one person working remotely is that you have to make them feel as much a part of the conversation as the folks 'in the room'. Traditional approaches such as conference calls, or even video calls, just don't cut it. The communication bandwidth is just not high enough. Even while keeping a focus on the fact that someone is "on the call", the person on the phone cannot hear exactly what is going on in the conversation in the room. Most importantly, the person on the phone cannot see and experience the communication nuances that happen in physical conversation, through body language. Even if the folks on the phone are super aware of their situation, and keep asking for repetition when they do not understand what is going on, they still miss out on the non-verbal communication cues.

The Solution To Communicating With Remote Team Members: Equality of Medium

The answer to the communication problem, as we experienced it, was to normalize communication methods. To ensure that all team members had exactly the same communication bandwidth, we forced nearly all conversations to happen over the phone, and through remote communication tools such as WebEx's Training Center.

The initial plan for the program was to run the first week as a full-time training on the phone, using WebEx Training Center as the lecture and interaction tool, and then get into a "War Room" to facilitate team member interaction. After that first week, we realized that although talking to people face to face was easy, talking to them on a headset while sharing a screen was not that much harder, and it included everyone. It took a few days for team members who usually don't communicate over a headset to get used to the process, but once they did, most team members did not mind the approach. All of our lectures and group sessions were facilitated through this medium, and we encouraged our team members to use the format when solving problems in small groups.

At the end of the program, we got overwhelmingly positive feedback about the usefulness of the medium, and the personal approach it provided. Remote team members loved it, and team members who usually do not deal with remote team members learned a new approach to communication, and also expressed a positive experience with it.

The approach also allowed us to use WebEx Training Center to record all training sessions and archive them. We now have a catalogue of all theory sessions recorded during the Top Gun GUI training program, available to team members interested in automated GUI testing.

All in all, I used to hate on remote training tools. But after this round of Top Gun GUI, I love them. 


Tuesday, March 22, 2016

My Talk @ Motor City Testers!

Hey Y'all,

I'll keep this short and sweet. Tomorrow (March 23, 2016), I'll be speaking at the Motor City Testers meetup in Detroit, MI. As part of the talk, I'm going to cover my experience in building an automated test suite from scratch.

Motor City Testers

Summary
Welp, there are three things you need to remember:

1. Convince your team of automation in phases, communicating wins along the way.

2. Ensure you have a training mechanism of some sort, to bring people up to speed on your technological approach.

3. Ensure that as folks come up to speed, you indoctrinate them into a community. This part is critical. Eventually, the automation you build will be so massive that everyone will need to lend a hand in maintaining and growing it...and what better way to do this than to empower them from the beginning!

Example Materials

At the talk, I'm also going to give some examples of tools we used in our team to accomplish the above, including the syllabus I designed for our "Top Gun" training sessions. A copy of the syllabus can be found @ https://github.com/mkonkolowicz/ExampleMaterials

For the more technical folks, we use a sweet Coded UI helper framework designed by one of our past senior devs. If you are interested in it, go get it either from GitHub (https://github.com/spelltwister/CodedUIFluentExtensions) or from NuGet (if you use Visual Studio as your IDE); the NuGet package name is the same as the GitHub repository. Additionally, the author (Mike Pavlak) tweets under the name @MPavlak12, so please give him some love!

Thursday, March 3, 2016

Influencing Unit Testing As A Quality Champion: How To Get Started

So recently I started working on a project which required me to write unit tests while my developer works on the code. At first I was a little intimidated, as previous attempts at unit testing the application did not turn out very well. Either our team created unit tests that, due to the way we've implemented some frameworks, were never run...or the tests were too difficult to create and the team decided it was not worth the effort. Recently, I decided that I needed to take up the charge in creating fast, easy-to-run unit tests, and that's what I've been trying to do.

Working with an application which was not unit test friendly, I decided that I needed small wins, one at a time, and that I needed to get all of our devs on board. To accomplish that, I decided I needed examples that I created myself.

Creating Examples...Challenges Encountered

I quickly realized that I really needed to accomplish two things. First, I needed to create tests that could easily be used as examples. Second, I needed them to actually run.

Our unit tests need to use the MS Fakes framework. This is due to the nature of our application (or specifically, our current implementation). I know this is very yesterday, and not recommended. But the way I see it, any unit test is better than no unit test, and once unit tests are written, we can start pulling pieces of our application apart without the fear of huge failure, thanks to our unit test safety net. The problem with our business layer, though, is that it is really woven together, and we could not pull fakes for individual projects out. So when we first tried to put our unit tests together, there was a lot of push back with respect to the additional build time, and the initiative failed.

About a month ago, one of our devs sent an email to the team which talked about how to isolate fakes to only the needed assemblies. This was very promising, and I decided to try it. The approach basically calls for limiting fakes generation to a selected namespace. To do this, you modify the generated .fakes file as below. With this limitation, you do not get access to all the possible namespaces in the project; you basically get what you specify. This approach has cut down on the build time which usually hinders our effort to implement unit tests that are easy to run. So this is how I was able to create tests that are easy (and quick) to run. So far, my tests have not taken more than a couple minutes to compile. Now you may think that is a long time, but really, it is not significantly longer than a build without unit tests...which is most important.
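The screenshot from the original post is gone, but the change lives in the .fakes file that Visual Studio generates when you add a fakes assembly. A minimal sketch of the kind of filter we specify (the assembly and namespace names below are placeholders, not our real ones) looks roughly like this:

```xml
<!-- BusinessLayer.fakes: limit shim generation so the fakes assembly
     compiles quickly. Assembly/namespace names are placeholders. -->
<Fakes xmlns="http://schemas.microsoft.com/fakes/2011/">
  <Assembly Name="BusinessLayer"/>
  <ShimGeneration>
    <Clear/>
    <!-- Without a trailing "!", FullName acts as a substring match,
         so this pulls in just the one namespace. -->
    <Add FullName="BusinessLayer.Orders."/>
  </ShimGeneration>
  <StubGeneration>
    <Clear/>
  </StubGeneration>
</Fakes>
```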



Thanks to my developer buddy for pointing this out!

Now, how effective are unit tests that don't run? Not effective at all, that's how effective! I decided I needed help from our dev ops team, and another developer buddy, to attach the tests to a continuous integration build, or to create a build gate. I figured, meh, at least if someone checks something in, my tests will ensure that our stuff is not broken! So I attempted to attach the tests to a build, and realized that it was a lot harder than I thought. But with the help of the guys mentioned above, I am almost there. As of today, we have a test build set up with TFS 2015 Update 1 and its "vNext" build agents, utilizing build gates based on unit tests. Now, these aren't my tests, and the build agent isn't hooked up to our production TFS servers, but I am confident that these steps are the beginning of a long, fruitful relationship with unit tests, which will definitely improve our quality.

My future plan is to create a build definition in our production TFS which will utilize the ability to use unit tests as a gate, and to be the first team to do this, as an example for all of our devs.

My next challenge...to figure out how to measure our impact...Spoiler Alert: SonarQube :)