Wednesday, November 1, 2017

Tech Dad Manifesto

On October 3rd, 2017 at 7:13 am, I became a Dad. I had been preparing for that moment for a long time, but I truly had no idea what it would bring. The minute my daughter Victoria arrived, my life changed from being the best me to being the best Dad. Immediately, I started thinking about new improvements to my personal and family process that would positively impact Victoria and my wife Brandi. I blazed through books such as “Dude, You’re A Dad” and “Dad Jokes”, and believe it or not, even started investigating purchasing a minivan (GASP!). I was away from the office for about a month, and in that timeframe, got used to being #1Dad (I even have a plaque). I cooked, I cleaned, I shopped, and I did a lot of baby snuggling. The time I spent with my new family addition was awesome.

Unimpressed Baby Is Unimpressed


Then reality set in, and I realized that in order to continue being #1Dad, I would have to elevate my game at work, both in terms of process and delivery. Now, even though our tech team alone added 11 new little humans to our families in the last year (2017), I understand that not all of us are caring for little ones. But I am confident that all of us on the tech team struggle with elevating our game at work while having enough time to care for or participate in the things we love.
Whether it's kids, puppies, or video games, the struggle of maintaining a balance between consistent high performance and time spent on the things or people you care about outside of work is real! After spending a month at home with my daughter and wife, I realized I needed to tighten up my process to fully utilize every possible second of time toward the things which would provide the biggest gain in personal process efficiency. The statement of action below is what I challenged myself to follow:

The Tech Dad Manifesto

Be Purposeful In All Activities
I refuse to do things without an agreed-upon purpose. Activities that I cannot attribute to a predetermined goal do not lead to my team's or my own long-term success. Goals are established before tasks are initiated. Goals are prioritized based on some sort of needs analysis. I understand that when finished, some goals may not turn out to be impactful. It is ok to only know what you know, and to try to establish the best course of action with the information at hand. Ignore the noise.


Be Disciplined
I will eliminate all unnecessary distractions, whether that means closing browser windows, turning off my phone, or aggregating team member questions into a pre-determined “question time”. My time is precious, and I have to ensure it is best used. I will politely exit conversations or meetings which do not have a purpose that relates to my goal or task. I am the master of my own time, and while I understand that I am not perfect, I will strive to improve my time discipline every single day. Every second counts.

Be Honest
I will give and receive feedback immediately and constructively. When I see an opportunity for improvement, I will voice my opinion, even if it is uncomfortable to do so. I will voice feedback with the intent of improvement and care. I will change my behaviour to ensure feedback is received as constructive, and not belittling or passive-aggressive. When I receive feedback, I will act on it, and ensure the feedback giver is treated with appreciation, even if the feedback is hard to hear. Do the right thing.



Be Comfortable With Being Uncomfortable
As a technologist, I understand the pace of our industry is very fast. I understand that in order to keep up, I have to always be learning and refactoring. I will not apologize for not being the master of all domains. I will be comfortable with the fact that I need to keep learning and advancing, and that I may not remember all the things I have learned in the past. I will always strive to learn new things, and to document, implement, and share my learnings. I will not be disheartened by the daunting task of implementing my big dreams. I will not become complacent in my thirst for knowledge, and I will not give up. Obsessed with finding a better way.


Be A Team Player
I will believe in my team and contribute my ideas to their process. I will be engaged, and I will make it a clear expectation that my team members are as well. I will not stand for disengagement and will seek to understand the reason behind its occurrence. I will lean on my team and expect them to lean on me in their time of need. I will foster and rely on relationships throughout the team as a way of solving hard problems. We are the “they”.



Balance

Above all else, I will strive to achieve a balance between what I deem most important and the activities that support it. I will not spend all my time on one while sacrificing the other, but I will not use the first as an excuse for not performing at the other. I will be purposeful, disciplined, honest, uncomfortable, and a team player. With this approach, I will be able to provide for my family while being the best geek I can be. That is how I will remain #1Dad and a top contributor on my tech team.

My 2017 Gilbie :)

WIFM (What's In It For Me)

I know, not everyone is #1Dad, or #1Mom…or plans to be, or even wants to be. But I am confident that every one of us wants more time to do the things we love, and most of us want to be awesome at what we do.

The above tenets were derived from the experience of the Crusaders team, and they have worked really well for us to date. I am a firm believer that they are highly reproducible for each and every one of us, and while not easy to perfect, they will improve your ability to strike a balance between doing the things you love and the thing that allows you to do those things. Ask yourself: what do you love to do? What is the reason you push yourself to be better every day?

For me, that reason was re-defined on October 3rd, 2017 at 7:13 am.


Mission in life: #1DAD

Wednesday, September 20, 2017

QL Tech Con 17 Resources

Hey Y'all!

It's QLTECHCON17 TIME!!!!!




In support of my talk at QLTechCon17, I'm posting a few resources that I thought you'd find useful:

Presentation Resources

1. My presentation
2. My twitter handle
3. The best technology team career site

Tool Examples: Orchestrators

4. TFS
5. Visual Studio Team Services
6. Jenkins
7. Git Test

Tool Examples: Execution Environments

8. Xamarin Test Cloud
9. Sauce Labs
10. Selenium Grid

Monday, August 28, 2017

Automation Delivery Pipelines






Hey, Friends! Yes, I know, it's been a while. It's been a busy summer for me. Vacations, family, and sailing kept summoning me to the outside world (priorities, right?), which in the winter months seemed so dreary and uninviting.
But enough about my excuses. Let's talk about new ideas. In the closing stages of last year, I posted an idea about automation delivery channels and their importance. I outlined the idea and why it is something we should care about. After a bit of thought, I would like to rebrand it: from now on, I am going to refer to it as the "Test Automation Pipeline". I think this verbiage speaks more to the topic as specific to integrating DevOps and testing. In this blog post, I am going to expand on the earlier introduction and provide steps on how to make one!

TLDR;

A Test Automation Pipeline is a test execution delivery tool that enables fast, reliable, scalable automated test execution. It should be created as early as possible in a test automation project. The pipeline consists of a test orchestrator and a test execution environment. In order to set up the pipeline, we will need to:

  1. Pick a test orchestrator and execution environment
  2. Hook up your test orchestrator to your execution environment
  3. Define your trigger strategy
  4. Define your execution steps 
  5. Educate your team

Test Automation Pipeline

Ah yes, the old 'Buzz Word Bingo' conundrum, "Synergy", "Efficiency", "Cloud Computing", etc. etc. etc. Is a Test Automation Pipeline one of these generic terms? Is the concept worth remembering? When it comes to test automation, you have to judge for yourself by answering a few questions which relate to your (or your team's) testing maturity.
  1. Can you (or any member of your team) run a single automated test whenever you please?
  2. Can you (or any member of your team) run a hundred automated tests whenever you please?
  3. Can you (or any member of your team) run a hundred tests, on different environments, whenever you please? (i.e. operating systems, browsers, devices, etc.) 
  4. Can you (or any member of your team) run a hundred tests, in different environments, and receive results fast enough to act on them?
  5. Can you (or any member of your team) repeat #4 as often as necessary?
PUSH BUTTON GET TEST!

If you've answered yes to question #5, then you should stop reading, because you have achieved a high enough level of maturity that the rest of this blog post will not teach you much. Go ahead, go do something else :). 

But if you are like the majority of the teams I've dealt with, your answers trailed off somewhere around question 1 or 2, and I think I know why. Most teams dedicate a lot of time to test automation, specifically writing scripts. Transitioning to a high level of test automation maturity takes a lot of effort. A team (or an individual) needs to learn how to write scripts, run scripts, debug scripts, determine what is automatable, decide what to automate, and communicate results. There's a lot of stuff to do!

I've been through this cycle myself. Our team went all in on test automation; we absolutely needed to. Our manual test efforts just could not keep up with the growing complexity of our application, and we knew we needed to automate away as many of the repetitive checks we found ourselves constantly performing as possible. So we started designing a solution for implementing test automation. 

We decided that for the automation effort to be successful, the end result had to be scalable, flexible, reliable, and easy to execute... in other words, "Push Button, Get Test". To achieve those goals, we decided that first, we needed a way to run the test automation that we would later build. We needed a way for anyone who wanted to execute test automation to do so, and to get results to check their work quickly. At Title Source, we truly believe in cross-functionality in our teams. A test suite can be executed automatically by a nightly trigger early in the morning, then by a Business Analyst who wants to check regression results of a specific test case, and finally by a developer implementing a new feature in the PM (likely in the late PM, since Devs are vampires and love the night time :) JK). So before writing any meaningful test automation code, we implemented a test automation pipeline. 


Steps To Implementing A Test Automation Pipeline 


1. Pick A Test Orchestrator And Execution Environment


Oh right, a test orchestrator... and an execution environment... of course! Wait, what are those? Were those the thoughts that ran through your head? Let me enlighten you. 

1.1 Picking A Test Orchestrator
A test orchestrator is a device which will automatically organize and execute your tests. A test orchestrator does not have to (but can) have a single responsibility. It is absolutely necessary because it is responsible for organizing how and where your tests are executed. It is also the device which receives feedback from your tests and organizes it in a digestible way. Examples of popular test orchestrators include the "Test Controller" used in Microsoft's legacy "Test Manager/Lab Manager" testing infrastructure (see helpful link #1 below), Microsoft's Team Foundation Server (or Visual Studio Team Services), and the Jenkins build system. The last two are great examples of how you can piggyback on an existing system for testing purposes. The choice of which orchestrator to use for your team's (or your own) testing purposes is highly team specific. All three are great choices, but your decision should be swayed by a few factors. 




A. Can I Piggyback On To An Existing Product?
I.e. is TFS already implemented in my environment? Does my team rely on Jenkins for building code? If the answer is yes, I would say that you will encounter the least friction in implementation and adoption if you stick to what you have.



B. I Have Money To Spend
If you do not have an existing solution, do you have a budget? If you have no limits, you can go all out and hire a consultant to implement the latest and greatest build system for you, hook it up, and you can simply start writing tests to execute in it. I would argue, though, that the majority of us operate in a world where the value you show pays for the things you want to play with. In this situation, my choice of orchestrator would be VSTS (the Azure-based TFS). Some will call me a Microsoft fanboy, but I have to say, the way that MS (Microsoft) has iterated this product from a testing perspective has been nothing short of amazing. VSTS has built up an impressive set of features for scaling, reporting, and coordinating test executions. They (the VSTS product team) respond to problems via GitHub and a UserVoice page and, most importantly, operate on a 2-week release cycle. This ensures a stable, reliable flow of improvements. VSTS (and TFS) provides an easy-to-use test execution method and out-of-the-box dashboarding for test results. It is my choice for test orchestration. The VSTS feature schedule can be found in helpful link #3 below.

The VSTS Feature Timeline Is Public! So Cool!

C. I Do Not Have Money To Spend. Like None. I'm Broke.
If you do not have an existing solution and you do not have money to spend, a great test orchestrator is Jenkins. It's free, as in speech. While you will still have to set it up and have a machine to run it from, your cost is your time and not a credit card. Jenkins operates on a plugin model, so it's really great for hooking up to test suites which are executed in environments not traditionally covered by pay-for models. I found it very popular with teams developing mobile products and requiring tests to be run on many devices.

Jenkins As A Test Orchestrator


There is no right or wrong choice. You have to decide what works best for your scenario. Look out for upcoming blog posts which provide examples of test environment hookups to different orchestrators!

1.2 Picking A Test Execution Environment
A test execution environment is the set of machines or devices where your tests will run. It is the receiving end of the test automation pipeline: it receives tests from the test orchestrator, runs them, and gives feedback to the orchestrator about the test runs. The test environment should consist of many machines (which could be mobile devices too!) that are able to run tests in parallel.

The choice of test execution environment depends on your team's needs. Arguably, this is the most custom decision in the test automation pipeline process. Does your team support a product hosted on a website? Does your team support a mobile product? Does your team support a desktop-based product? The answers to these questions will shape your approach. In our case, it was yes to all three. So we went with loosely coupled virtual machines for our desktop product, Sauce Labs for externally facing applications, an internally hosted Selenium Grid for our web product, and Xamarin Test Cloud for our mobile products. We chose these products based on a combination of what we needed to test, existing infrastructure limitations, and future proofing. The general direction from our perspective was to offload as much maintenance of the environment as possible to external vendors, so we could focus on our core competency: writing proper tests which can be scaled.

The requirements we had for environments included speed, consistency, and repeatability. Each of the environments we picked could be initialized and destroyed programmatically (via the test orchestrator or the setup of the test script) and could be scaled to allow parallel test execution, achieving as quick of a feedback loop as we wanted to pay for. The vendors we chose guaranteed environmental consistency. Ironically, the environment we have the most problems with is our loosely coupled VM environment. Because we are not experts at maintaining the machines and their states, we see different performance and different network conditions, which introduce environmental variability resulting in variable test results.

The choice of test execution environment depends on what your team needs. Always ask yourself if the environment you choose will allow you to test your product(s) in a scalable, predictable way, which will provide your team quick feedback.






2. Hook Up Your Test Orchestrator And Execution Environment

Once you pick your test orchestrator and execution environment, you will need to hook them up. This step is mostly technical and very environment and orchestrator specific. You will see examples of this step in following blog posts which focus on individual examples. Having said that, the one thing which you absolutely cannot take for granted is communication between the execution environment and the test orchestrator. This has to be confirmed as early as possible. In our environment, this was one of the larger hurdles we had to get over, due to infrastructure setup. The worst part of setting up the test automation pipeline was waiting for network infrastructure adjustments to allow communication from the test orchestrator to the test environment. It is in the best interest of the test automation effort to get over this hurdle with at least one machine or device in the execution environment.

Hook Up One Machine To Ensure Connectivity!
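
One cheap way to confirm this early is to script a trivial reachability check and run it from the orchestrator's build agent itself. Below is a minimal Ruby sketch; the Selenium Grid console URL is purely a placeholder for whatever endpoint your own execution environment exposes:

  # connectivity_check.rb - run from the orchestrator's agent machine.
  # The URL below is hypothetical; point it at your own environment.
  require 'net/http'

  grid_console = URI('http://selenium-grid.mycompany.local:4444/grid/console')

  begin
    response = Net::HTTP.get_response(grid_console)
    puts "Orchestrator can reach the execution environment (HTTP #{response.code})."
  rescue StandardError => e
    abort "Cannot reach the execution environment: #{e.message}"
  end

A check like this takes minutes to write, and it turns "the tests mysteriously hang" into "the network team needs to open a port" long before you have thousands of tests waiting on the answer.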

3. Define Your Trigger Strategy

Once you have decided on your test orchestrator and execution environment, and confirmed that at least one device or machine in the execution environment can communicate with the orchestrator, it's time to define how often your tests will run. This decision is again specific to what your team needs, the execution time of your tests, and the cost of execution.

In our scenario, we execute all of our tests on a nightly basis. This means that we run our entire regression suite overnight and constantly ensure that our execution pass rate stays relatively high. We notice dips in pass rate on a daily basis and investigate them right away. This strategy works for us, but it does have disadvantages. Namely, we spend a lot of time investigating broken tests (usually broken due to test code bugs). Your team may want to run subsets of tests on a nightly basis, and the full regression once per build.

The frequency of running tests depends on what type of feedback loop your team requires. From my perspective, the more often you release, the more feedback you require. So as my team moves toward daily pushes, we will increase the frequency of our test runs accordingly. It is imperative to determine what feedback loop works for you and set your test automation pipeline to trigger your test runs to provide that feedback. Triggering your test automation run is usually controlled by your test orchestrator. As with the hookup, I will not discuss the technical details in this post, but leave them to the platform specific posts coming up soon. The most important thing to remember is that you will have to align the trigger strategy with your release strategy, to give feedback quickly enough for it to be relevant. Your team may want your tests (or a subset) to run on every check-in, or only when triggered manually. You have to decide what rate of feedback works for you.

More Feedback, Through Continuous Integration!
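
As a concrete example, a nightly trigger in Jenkins is nothing more than a cron expression in the job's "Build periodically" field. The line below mirrors the roughly-3-AM schedule I describe in the Jenkins post further down; the H token lets Jenkins pick the exact minute, spreading jobs out to avoid load spikes:

  H 3 * * *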

4. Define Your Execution Steps

We have a functional test execution environment hooked up to a functional test orchestrator. We have a defined trigger strategy, and we are ready to execute tests and read those beautiful reports. Now how do we do that? Just push the "get test" button and read the result, right? Well, maybe not. Running test automation and receiving results may involve a bit more than that.

Before we run tests, we have to think of any prerequisite tasks that need to be executed. As with implementing the trigger strategy, this is fairly environment specific, but worth talking about from a strategic perspective, since it takes some thought. It is best to assume that our environment will be barren, and that anything our product under test needs in order to work will have to be copied in, installed, or injected in some other way. We also have to think about where our test code is coming from, and how it will be executed. In most situations I've experienced, we pull test source code from a repository and compile it before being able to execute it. Since we are treating our environments as brand new every time, we also have to ensure that any operating system level alerts or messages will be handled. I like to do this through registry manipulation (on Windows), so as part of the steps executed prior to test execution, I copy in any registry settings which could otherwise sabotage my run.

After compilation, we have to tell our test orchestrator where to find our tests. This step enables the tool actually running the tests (for example MSTest) to use the proper binary files for execution. Finally, we have to ensure that the test results are easily accessible via reports. This last piece is a feature which is very mature in some orchestrators (for example TFS/VSTS) and needs a plugin in others (for example Jenkins). Test result generation can also be handled by some execution environments, usually ones sold by third party companies, like Sauce Labs or Xamarin Test Cloud (Microsoft). It is important that we have a clear way of sharing the results of our tests.

Think Through Your Test Execution Map
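
To make those steps concrete, here is a rough Ruby sketch of an execution-step script in the order just described. Every path, repository URL, and tool invocation below is hypothetical; your orchestrator will have its own way of expressing each step:

  # run_tests.rb - a sketch of the execution steps described above.
  # All paths, URLs, and file names are placeholders.
  REPO    = 'https://tfs.mycompany.local/Tests/_git/regression-suite'
  WORKDIR = 'C:/agent/_work/tests'

  # 1. Assume a barren environment: pull the test source fresh every run.
  system("git clone #{REPO} #{WORKDIR}") or abort 'clone failed'

  Dir.chdir(WORKDIR) do
    # 2. Inject prerequisites, e.g. registry settings that silence OS alerts.
    system('reg import suppress_alerts.reg') or abort 'registry import failed'

    # 3. Compile the tests so the runner has binaries to execute.
    system('msbuild RegressionSuite.sln /p:Configuration=Release') or abort 'build failed'

    # 4. Point the runner at the compiled binaries and execute.
    system('mstest /testcontainer:bin/Release/RegressionSuite.dll ' \
           '/resultsfile:results.trx') or abort 'test run failed'
  end

  # 5. Publishing results.trx to a dashboard is orchestrator specific
  #    (built in on TFS/VSTS, a plugin on Jenkins).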

5. Educate Your Team

The last piece in establishing a successful test automation pipeline is probably the hardest. One of the biggest potential gains of test automation is its ability to be executed and examined by folks who did not write it. The idea that a Business Analyst or Developer could use tests written by a Test Engineer to ensure quality is delivered for a feature change allows a team to eliminate bottlenecks and increase delivery speed. But in order to do this, they (your team) need to be educated on how to run tests and analyze results.

Every team is different, but here are some strategies that have worked well for our teams:

1. "How To" Blog Posts
If your team doesn't have an internal blog, you should really get one. It's a great way to spread information. We use ours extensively to share knowledge, spark conversations, ask questions, even show calendars of events! I have written blog posts with respect to how to trigger automation, how to examine automation, and how the automated tests work. It has saved me countless repetitions when asked questions by new folks on our teams and has proved to be a great source of knowledge sharing.

2. Word of Mouth
Specifically, daily automation briefs at your standup meeting. This is a really easy thing to do if you are examining the status of your test automation pipeline and automated runs on a regular basis. Determine consistent verbiage with respect to state, for example "green", "yellow", "red", and communicate the state of the product based on the number of failures and the state of the test automation pipeline.

3. Regular Summaries
My team receives a daily test automation summary, which I put together from the dashboards that the test orchestrator provides for me. This daily reminder is key to your team's awareness of the success rate of the latest test automation execution and the readiness of the test automation pipeline. In this communication, I provide a summary of those two metrics (success rate, readiness), along with pointers on how to reach the specific dashboards behind them. The summary should be delivered by whichever communication method your team is most in tune with. In our case, it's email, but it could be via Slack, HipChat, or even a daily blog post. It is important to mention that although this summary covers the test execution results, it should also call out any test automation pipeline outages. That last part provides wide distribution in case a particularly important pipeline needs many hands to fix it.




Obviously, there are other effective ways of educating your team: lunch & learns, training, floggings, whatever works for you. Just kidding about the floggings. The key, from my team's experience, is that a consistent, regular message is sent to a wide audience. The focus of the communication should be the status of the test automation pipeline, and there should be reference points that show step-by-step instructions on how to execute and analyze the tests which the pipeline runs. 

Early setup of a test automation pipeline really helped my team focus on writing tests that could be reliably executed and scaled. By designing and implementing the pipeline before focusing on producing a significant number of tests, we ensured that when we did produce a significant number of tests (thousands of them!), we were not worrying about why tests could not be executed or were not executing fast enough. We have followed the above-outlined pipeline design process multiple times, and each time found that it enabled us to focus on figuring out what to test, instead of how to run tests.

STRIVING FOR: PUSH BUTTON GET TEST!

In this post, we talked about what a test automation pipeline is, what its value is, and how to set one up, focusing on the theory behind each piece. In the next two blog posts, we will look at specific examples of how to set up a test automation pipeline for different environments and test orchestrators.

Helpful Links

Thursday, March 30, 2017

Setting Up Calabash: Debugging Page Models

Debugging




Hey there! Thanks for visiting! This post is the 4th in the series focused on mobile automation, specifically Calabash. In the first two posts, we found out how to set up the Calabash framework for testing iOS and Android apps, and why and how we should use the page object modeling pattern in our mobile tests. The third post dealt with setting up an automation delivery channel via Jenkins and Xamarin Test Cloud.

This post is going to be focused on my process for debugging page models during test creation. We'll look at the Calabash console, and how to ensure your page model behaviors are performing the expected actions. We'll focus on an iOS page model, but the same principles can be applied to Android page models. If you need help setting up your page modeling project, or want to understand more about the page object modeling pattern for mobile testing, click on the links below to navigate to my previous posts!

TLDR;


  1. When starting to write a test suite, write the feature, then the step definitions, then the page models
  2. Do not use any pre-canned steps or Calabash-Android or Calabash-iOS API steps in the step definitions
  3. Write iOS models first, entering accessibility IDs if possible. Copy the structure of the iOS page models to Android
  4. Debug page models by checking properties in the console (Android and iOS)
  5. Debug the entire test by running locally, then on a different machine (i.e. Xamarin Test Cloud)

Assumptions

1. This post assumes you have set up your project for page object modeling
2. This post assumes you have been able to create at least one page model
3. This post assumes you have kept the tools outlined in the Environmental Pre-Reqs post below and can run Calabash tests.

Process For Writing Tests

First things first. Before we get into debugging, it is important to identify a good process for writing tests. This process is important to outline because it identifies the pieces that only have to be written once for both platforms.

1. Scenario (Cross Platform... ie. Write Once)

In Calabash page modeling, a scenario can be shared between iOS and Android. It specifies the business logic flow you are trying to test. A scenario is stored in the ".feature" file and controls the flow of the test for both platforms.


2. Step Definitions (Cross Platform ie. Write Once)

Step definitions match up with the scenario flow. This step identifies what specific actions are going to be taken. Notice that the step definitions do not use the prebuilt Ruby steps, nor do they use API calls to the iOS or Android Calabash APIs. This is a very important detail, as it allows for true cross-platform re-use. 



3. Page Models (Platform Specific)

A page model implements the step definitions. This is where the fun starts. A platform specific implementation of an object uses the Calabash iOS or Calabash Android API to make calls specific to the platform. Because we've gone through steps 1 and 2, we can now focus on how to ensure the page model works appropriately for each platform, with very little re-work.


Page Model Composition 

A page model should be made up of two main sections: one for the properties of the model, and one for its behaviors. The properties section defines how the elements of the page model are found, and the behaviors define what the page model gives a user access to. I find it easiest to initially identify only the properties and behaviors which I am interested in for the immediate test, instead of trying to identify all properties and behaviors of a page. This works better because once I have the page model created and working for one property and behavior, it is easy to replicate for others as new test flows require. 
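
To make the two sections concrete, here is a minimal sketch of what an iOS page model might look like, using the hypothetical "UILogin" button that the debugging examples below query for:

  require 'calabash-cucumber/ibase'

  class LoginPageModel < Calabash::IBase
    # Property section: queries describing how elements are found.
    def login_button
      "UIButton marked:'UILogin'"
    end

    # The trait is the query that proves this screen is displayed.
    def trait
      login_button
    end

    # Behaviour section: actions the page gives a test access to.
    def log_in
      wait_for_element_exists(login_button, timeout: 10)
      touch(login_button)
    end
  end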

Page Model Property Identification

Each property on the page model has to somehow be found by the Calabash testing framework. I found it best to use the API filter "marked". As described in a Xamarin Calabash Query Syntax Example, this filter uses the accessibility ID property in iOS and the ID property in Android to filter elements. I believe that using accessibility IDs in iOS and IDs in Android is the best way to hook on to elements, but as described in the link above, there are many other ways to filter (retrieve) elements. The key is to ensure you have the elements available for use. 

Debugging Page Model Properties and Behaviours

So now we know that we have to figure out what is available for us to use and then test that we are using it correctly. But how exactly do we do that? 

Enter the console. The Calabash console was, to me, the most helpful debugging tool for iOS and Android. It can be started from a terminal window by running the command "bundle exec calabash-ios console" or "bundle exec calabash-android console <path to apk>". To launch your application, you will then need to run the command "start_test_server_in_background". Xamarin does a pretty great job of explaining how to use the functionality in their Calabash-Android Console Wiki Article and their Calabash-iOS Console Wiki Article.

The basic premise is this: The console will tell you in real time whether your property retrieval commands or behaviors work. 

Example 1: Check if you have access to a property
Let's say we wanted to see if the above property, called "UILogin", was available for our use, i.e. that if called by the page model, it would be able to be acted on. In order to ensure we had the correct property, we would want to run a query on the application and ensure it retrieved the correct property. We would need to follow the steps below: 

Step 1: Launch Terminal
Step 2: Navigate to your project directory (ie. CD directory)
Step 3: Launch the Calabash console (let's assume iOS) using the command "bundle exec calabash-ios console"
Step 4: Launch the test server by executing the command "start_test_server_in_background", which for iOS will launch your default iOS simulator and then your application under test. 
Step 5: Ensure the screen that contains the property is on screen

Step 6: Run the command: query "UIButton marked:'UILogin'"

If our command returns the property with its available values, we have correctly identified the property! Otherwise, we will get back an empty result ("[]").  

We should repeat this step for all properties on a page model before we move on to behaviors.



Example 2: Check If A Behaviour Works  
Similarly to how we check that a property can be retrieved, we need to check that the Calabash API functions we are calling work within the behavior methods specified in our page model. The basic debugging flow is the same as for checking a property, but instead of ensuring our properties are available for action, we will now ensure that the actions we want to perform are correctly set up for the screen we are testing. 

Step 1: Launch Terminal
Step 2: Navigate to your project directory (ie. CD directory)
Step 3: Launch the Calabash console (let's assume iOS) using the command "bundle exec calabash-ios console"
Step 4: Launch the test server by executing the command "start_test_server_in_background", which for iOS will launch your default iOS simulator and then your application under test. 
Step 5: Ensure the screen that contains the behaviour is on screen

Step 6: Use the Calabash-iOS API to run the contents of a behaviour. For example, if we wanted to see whether a swipe with a specific offset and swipe delta, based on the location of an element, works:

  swipe :left, :query => "* marked:'UITutorial'", :offset => {:x => 100, :"swipe-delta" => {:horizontal => {:dx => 500, :dy => 500}}}

If our command works, we should see its action (for example a long swipe) executed on the iOS simulator (or the hooked-up Android device). 

Debugging The Entire Flow (Local)

Once we've been able to debug the page model properties and behaviors, we should perform some full test flows. This can be accomplished by simply initiating the tests from the terminal with the usual "bundle exec cucumber" commands, using the appropriate iOS or Android profile. I found that the majority of my debugging time was spent figuring out how to find the correct properties and behaviors to use at the page model level. However, I did notice that some timing issues with respect to screen loads messed up my test flow when I ran it from start to finish. I found it very helpful to use the platform specific wait helpers (Calabash-iOS Wait Helpers, Calabash-Android Wait Helpers) to allow my test to keep up with the application flow. 
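
For example, instead of acting on an element the instant a step runs, guard the action with a wait; the query and timeout here are illustrative:

  # Give a slow screen up to 30 seconds to render before acting on it.
  wait_for_element_exists("* marked:'UILogin'", timeout: 30)
  touch("* marked:'UILogin'")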

Final Test: Submit To Xamarin Test Cloud

Finally, after I got my tests to run fine on my machine, I submitted them to Xamarin Test Cloud for a final check. After all, an automated test should run with the same results on multiple machines, right? Submission to Xamarin Test Cloud required a bit of customization beyond the base Xamarin guidance for Calabash test submissions. Specifically, I needed to specify which profile I was going to use (i.e. Android or iOS).

An example of a command I used from the terminal:

 test-cloud submit /Path/To/AppUnderTest.ipa mYaPpGuiDfRoMxTc --devices dEvIcEiDfRoMxTc --series "TestSeries" --locale "en_US" --user myUser@myOrganization.com --profile ios --config /location/of/cucumberyml/on/my/machine/cucumber.yml

I would then see the results of the run listed in the terminal, and call it a day if it passed!




So that's basically it, that's how I debugged my tests. Easy, right? ;) For most of the debugging flow, I felt like the dude at the top of this post :)

Links




Wednesday, March 29, 2017

Setting Up Calabash: A Jenkins Build For Calabash Test Execution On Xamarin Test Cloud


Greetings and Welcome Back! 


In this third post about mobile tests, I will outline how to make a Jenkins build that submits pre-existing Calabash tests to Xamarin Test Cloud. The purpose of this post is to walk you through the process, providing an automated way of running your iOS and Android focused tests on Xamarin Test Cloud on a nightly basis.



TLDR;

1. You need a working Jenkins server instance to run a Jenkins XTC Calabash test run

2. You will need the rbenv plugin configured on your Jenkins instance

3. You will need to specify the Ruby version as 2.3.1 (for now) in your Jenkins build, and set up a Jenkins specific folder for rbenv on your Jenkins machine.

4. You will need to supply a custom bash script to specify what to submit to XTC (example below)

Pre-Reqs

You will need access to a functional Jenkins instance, and it helps to have administrator privileges for the purpose of installing plugins. You will need to install the rbenv plugin before creating the build. You will also need to create a new "Freestyle" Jenkins build.

Background

I decided to write this blog post because I could not find any guides to submitting Calabash tests to Xamarin Test Cloud using Jenkins. However, I used Jeffry's post as a starting point for setting up my build.

Setting Up A Jenkins Powered Calabash Test Run On Xamarin Test Cloud

General Section

Almost no changes from the defaults. Give your build a good name that reflects the test app and the platform you are testing against.

Jenkins General Section

Source Code Management

You will need to specify your test code repository. In my case, it is a Git repository. If you do not see your source code repo types available, it is possible your Jenkins instance does not have the necessary plugin. You will need to add it to Jenkins if your screen does not look like the screenshot below.

Jenkins Source Code Management


Build Triggers

Nothing crazy once again. In this section, you can specify when your build kicks off. I have mine set for a nightly kickoff at around 3 AM.

Jenkins Build Triggers


Build Environment

This section is very important. First, the "rbenv build wrapper" checkbox has to be checked. Second, in the advanced settings for the rbenv build wrapper, it is important to specify the current XTC Ruby version, which is outlined in this blog post. Third, you want to ensure that in the preinstall gem list, you specify bundler and rake.

Finally, you have to ensure that you have created a folder in your Jenkins machine's $HOME location to be used for the Jenkins rbenv bits. This is a dedicated folder for the rbenv bits used by Jenkins, so you do not want to leave it as the default rbenv location on your Jenkins machine. If this folder is not present, your build will fail.

Jenkins Build Environment

Build

This section is where the build script actually runs. As per Xamarin guidance, I've moved the build script into a separate bash script and checked it into my test repository, so the screenshot below only shows the execution of the build script. Notice that I use an environment variable to cd into the script workspace on the Jenkins machine to execute the script.

Jenkins Build Section


Build Script

An example of my build script can be seen in the screenshot below, but there are a few sections worth writing about. First, it's imperative you specify where to pick up your .ipa or .apk from. I've specified a hard-coded directory in my script ("APP_FILE"), but it is possible to set up a Jenkins build variable to hook up your application's build output directory location.

Second, it is necessary to specify your test run details. The variables "TEST_SERIES", "LOCALE", and "DEVICE_SET" control which tests you will be running, which locale you are running against, and which devices you will be executing on. You also have to specify your Xamarin Test Cloud user account ("XTC_USER") and your API key ("API_KEY"). All of your Xamarin Test Cloud details can be obtained by initiating a manual test run on Xamarin Test Cloud. Details on how to do that can be found toward the end of this article.

Finally, it is imperative to specify which platform specific pieces you need the Calabash execution to load. This detail is specified through the cucumber.yml and the profile flag. I set these two details through the definition of the "TEST_RUN_CONFIG" and "EXECUTION_PROFILE" variables.
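
For reference, here is a skeleton of what such a bash script might look like, tying together the variables described above. Every value shown is a placeholder; the submit syntax follows standard test-cloud gem usage:

  #!/usr/bin/env bash
  # Skeleton XTC submission script - all values are placeholders.
  set -e

  APP_FILE="/path/to/AppUnderTest.ipa"   # or derive from a Jenkins build variable
  TEST_SERIES="TestSeries"
  LOCALE="en_US"
  DEVICE_SET="dEvIcEiDfRoMxTc"           # device set ID from a manual XTC run
  XTC_USER="myUser@myOrganization.com"
  API_KEY="mYaPiKeYfRoMxTc"              # API key from a manual XTC run
  TEST_RUN_CONFIG="config/cucumber.yml"
  EXECUTION_PROFILE="ios"                # or "android"

  bundle install

  bundle exec test-cloud submit "$APP_FILE" "$API_KEY" \
    --devices "$DEVICE_SET" \
    --series "$TEST_SERIES" \
    --locale "$LOCALE" \
    --user "$XTC_USER" \
    --profile "$EXECUTION_PROFILE" \
    --config "$TEST_RUN_CONFIG"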


Xamarin Test Cloud custom bash script


If you've done everything correctly, you will see that the build runs and, in the console output, pulls your code from the git repo, downloads and installs Ruby, then the gems, and then starts XTC execution. You will see test cloud execution start in the Jenkins console output. It is indicated by the appearance of the XTC process, including dependency verification, digest calculation, upload, validation, and finally a run.

It is worth noting that after the build is executed, a link to the results is located in the build console output, which gives a really nice way to get to the XTC dashboard if you are already examining Jenkins builds on a daily basis. Alternatively, Xamarin Test Cloud sends email notifications of executed test runs, so you can be notified that way too.

Xamarin Test Cloud notification


And that's it! If you are interested in a specific build log example, one is posted here, but aside from that, thanks for reading and happy testing!

Links

Setting Up A Xamarin Build On Jenkins
Submitting Tests To Xamarin Test Cloud
Example Build Log

Monday, February 27, 2017

Setting Up Calabash: Page Modelling For Mobile Tests

Welcome back! This is the second post in the series dealing with setting up Calabash. The first post dealt with setting up your environment in preparation for writing the tests. This post will deal with setting up your test solution for page modeling, and some lessons I learned from my first page-modeling-driven test.

TLDR;

1. Page modeling comes from Selenium and is a pattern that maps the behaviour of pages to code objects.
2. A specific Calabash directory structure has to exist for an XPlat (cross platform) page modeling approach to a test suite.
3. Cucumber.yml controls the dependencies and the platform specific flow of execution
4. Feature files contain plain language actions which can map to the acceptance criteria of a story
5. Shared step files contain code that acts as an interface for how the plain language steps should be executed
6. Page models contain the platform specific (iOS, Android) implementations of the properties and methods used in shared steps. 
7. It's easier to write iOS tests first, but we need to stick to IDs as automation hooks and not break the page modeling pattern.
8. If we follow the page modeling approach, we can implement tests for the second platform in about 1/3 of the time of the first!

Page Modelling: What Is It And Why Should I Use It?


The page object modeling pattern originates from Selenium and has been widely adopted in GUI (Graphical User Interface) automation. A link to the original Selenium documentation can be found in the links section below. Page Object Modelling (POM) uses an object driven design to describe the behaviors of the page or screen in question. This type of design separates the test from the implementation, allowing easier maintenance and higher reusability of code. This last point becomes very evident in the mobile space: once written, two-thirds of a test which runs against one platform (ex. iOS) can be reused for another platform (ex. Android).

How Do I Set Up My Solution For Page Modelling and XPlat Execution?

The default Calabash feature generation command ("calabash-ios gen") sets up your directory structure without anticipating page modeling or cross platform (XPlat) execution. This means that you will end up with something that looks like the below.
Default generated Calabash-iOS directory
But to prep our solution for XPlat (cross platform) execution, we need to add a few things. Remember, the goal of page modeling is to reduce maintenance and increase code re-use. We will need some shared elements and some that are platform specific.
  

Step 1: Generate An Appropriate Directory Structure

The XPlat page modeling approach in Calabash assumes that both iOS and Android tests execute a very similar test flow, and so they share features and step definitions. You will see that the ".feature" and "_steps.rb" files are not defined in platform specific folders. 

Your directory structure should follow the following breakdown
  • Features
    • android
      • pages
        • AndroidPageModel.rb
      • support
        • app_life_cycle_hooks.rb
    • ios
      • pages
        • iOSPageModel.rb
      • support
        • 01_launch.rb
    • step_definitions
      • shared_Steps.rb
    • support
    • features.feature
My test solution layout

Test Execution Flow

Upon executing a test command (for example "bundle exec cucumber -p ios"), cucumber decides which pieces of the solution are necessary for inclusion, and then which test platform to execute against. We will now take a look at the pieces in the solution (directory) structure that control this flow.


Important Test Execution Flow Concept: Cucumber.yml

Scope: Shared between iOS and Android
Purpose: Define which files to include in test flow execution
Details: This file uses the "-r" parameter to include files and directories in the compilation. It is important to include only the platform specific files for test flow execution, e.g. do not include any iOS specific pages in the android profile, and vice versa. 
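
A sketch of what the two profiles might look like against the directory structure from Step 1 (your paths and environment variables may differ):

  ios: PLATFORM=ios -r features/support -r features/ios/support -r features/ios/pages -r features/step_definitions
  android: PLATFORM=android -r features/support -r features/android/support -r features/android/pages -r features/step_definitions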



Important Test Execution Flow Concept: Feature File (X.feature)

Scope: Shared between iOS and Android
Purpose: Define feature flow in domain specific language. 
Details: This file reads as plain English, but needs to match up to step definitions. Each of the scenario's defined steps needs to be matched to a step definition.
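
As an illustration, a hypothetical feature for a tutorial flow, written once and shared by both platforms, might read:

  Feature: Tutorial
    Scenario: A new user can skip the tutorial
      Given the app has launched to the tutorial
      When I skip the tutorial
      Then I should see the login screen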


Important Test Execution Flow Concept: Shared Steps Definition

Scope: Shared between iOS and Android
Purpose: Define what each step means. Acts as an interface, which will later have to be implemented by the page models (individually on iOS and Android).
Details: The shared steps file acts as an interface which defines how the behaviors necessary to execute the actions from the feature file will be executed. For example, carrying out the action "Given the app has launched to the tutorial" means waiting up to 30s for the TutorialPageModel to load. 

NOTE: In order to maintain separation of concerns with respect to platform specific implementation, all steps in the shared steps have to be custom written steps. A step definition file used for an XPlat approach cannot contain platform specific canned Calabash steps.  
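
Continuing the hypothetical tutorial feature above, the shared step definitions might look like the sketch below. Note that they only talk to page models; the platform decision is deferred to whichever profile cucumber.yml loaded (the page model class names are hypothetical):

  Given(/^the app has launched to the tutorial$/) do
    # page() instantiates the iOS or Android TutorialPageModel,
    # whichever the active profile included; await waits on its trait.
    @tutorial = page(TutorialPageModel).await(timeout: 30)
  end

  When(/^I skip the tutorial$/) do
    @tutorial.skip
  end

  Then(/^I should see the login screen$/) do
    page(LoginPageModel).await(timeout: 30)
  end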



Important Test Execution Flow Concept: Page Model Files

Scope: Specific to iOS and Android
Purpose: Implement the platform specific actions that are defined in the shared steps.
Details: Each page object file carries out the actions that the shared steps defined through an implementation of methods. The page object files take advantage of the Calabash API (one for iOS and one for Android), to carry out the actions defined in the shared steps. The page object files need to inherit from their respective base classes (ABase, IBase) in order to be recognized as page objects.



Step 2: Write Shared Features

Shared features provide a plain English definition of how the application feature should behave. These steps have to be written in the "Given When Then" format in order to define what the expected result of an action should be. Shared features should be able to be written very early in the lifecycle of the tests, and even of the application under test. My team has seen success in educating the entire team on how to write proper acceptance criteria for features in a "Given When Then" format, since those criteria translate very well into feature files in Calabash. The Agile Alliance has a great write up on using the "Given When Then" format for acceptance criteria, to which a link is included in the links section. Additionally, since these criteria are written in plain English, any member of the team who is familiar with the purpose of the feature can write them.

Step 3: Write Shared Step Definitions

As a reminder, shared step definitions provide an interface describing what it means to carry out the actions necessary for the behaviors defined in the feature definition. This means that some code is written into this file, and it is not 100% readable, but at the same time, the code implemented here is still not platform specific. A shared step definition used for XPlat page object modelled test suites requires the use of all custom steps, to ensure that no platform specific steps exist which cannot be shared between both implementations. 

Step 4: Implement Step Definitions Through Page Models on iOS

When composing my test suite, I chose to implement the iOS tests first, simply because a few of my friends from the team had already written tests for Android.

This approach turned out to be a good one, as it saved me a bit of upfront work. I knew going into this project that I wanted to stick to the approach of using IDs on both platforms (iOS and Android) as automation hooks. For the iOS platform, Calabash uses the "AccessibilityID" property as the default automation hook. Now I don't know about your app, but mine did not contain many accessibility IDs.

So after wrestling with Xcode a bit, I was able to add them to my application under test and implement a pattern in my page objects of using the ID field as the default automation hook. Because of this approach, in general, I was able to use the API call which looks for "marks" on properties retrieved by Calabash. This API call (documented here and referenced in the links section under the API Query Syntax) filters objects by ID, contentDescription, or text. Simply put, it looks for a wide variety of hooks, and given that I made sure to enter IDs on iOS, it worked for that platform. The beautiful thing was that on Android, because most IDs are essential to the application flow, I was able to maintain very similar page models and just change the IDs!
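
As an example, here is a minimal sketch of an iOS page model built around that pattern, implementing the hypothetical tutorial steps from above; the "UITutorial" and "UISkip" accessibility IDs are illustrative:

  require 'calabash-cucumber/ibase'

  class TutorialPageModel < Calabash::IBase
    # The trait is the query that proves this screen is displayed.
    def trait
      "* marked:'UITutorial'"
    end

    # Property: the skip button, found via its accessibility ID.
    def skip_button
      "UIButton marked:'UISkip'"
    end

    # Behaviour: dismiss the tutorial.
    def skip
      wait_for_element_exists(skip_button, timeout: 10)
      touch(skip_button)
    end
  end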




IMPORTANT NOTE: Execution on iOS Requires Linking of Calabash To A Debug Version Of The Application Under Test!


I will mention this in a subsequent post about debugging and actual execution of the tests, but it is worth noting that before you can actually run Calabash-iOS tests, you will need to link the Calabash framework to your application. This is necessary because, at its essence, Calabash is a person in the middle which is allowed to make calls to the application under test through explicit permissions. A tutorial on how to link Calabash-iOS to your application can be found here and in the links section below. Without doing this, the tests will not run. 

Step 5: Implement Step Definitions Through Page Models on Android
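
This step is where the payoff shows up. Because the features and shared steps already exist, the Android page model is a near copy of its iOS sibling: it inherits from ABase instead of IBase, and swaps the iOS accessibility IDs for Android IDs. A sketch mirroring the iOS model above (the Android IDs are again hypothetical):

  require 'calabash-android/abase'

  class TutorialPageModel < Calabash::ABase
    # Same trait concept as iOS, hooked to an Android view ID instead.
    def trait
      "* marked:'tutorial_view'"
    end

    def skip_button
      "* marked:'skip_button'"
    end

    # Identical behaviour to the iOS model; only the hooks differ.
    def skip
      wait_for_element_exists(skip_button, timeout: 10)
      touch(skip_button)
    end
  end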

In Closing...


Implementing the iOS page models (and steps, and features), then debugging and testing them locally and on Xamarin Test Cloud, took about a week. 

"Re-implementing" the entire flow for Android...1 day. Mind == Blown. This process really sold me on the value of using proper automation hooks (ids) and page modeling. I know, I know, it doesn't seem possible. There's only one way to experience this joy...try it! 


Links