QA isn’t sexy — it never has been and it never will be. It’s the unrewarded, overlooked side of software development, and the go-to blame group when things go south. Generally speaking, people get hyped up by the flashiness of mobile design, by use cases, and — of course — by revenue opportunities. To many companies, QA testing is much like HR. You know you need to have a team, but because you perceive it as not being able to produce a true trackable ROI, you often dismiss QA and mobile testing and classify it as another “cost of doing business”.
What most people in business fail to understand is that QA testing may very well be the MOST important process in the cycle of software development. Let that information sink in for a minute. I said the most important part. Why? Because it doesn’t matter how cool and sexy your design looks — if the functionality you just released to production is broken, the customer will be pissed and may never return to your app. Similarly, you can have the coolest, most usable, most beautiful and intuitive app in the world that gives the user an infinite amount of value, but if a page takes 12 seconds to load, nobody's going to use it unless they have to — and they'll use it grudgingly at best.
Novels can be written about customer dissatisfaction with mobile app performance. There's been so much data collected, all clearly showing that customers who are unhappy with mobile app performance will abandon your app and potentially never return. 51% of shoppers claim to have abandoned a page because of slow load times. 85% of mobile users expect mobile apps to perform as fast or faster (!) than desktop websites. And while 64% of all mobile users expect mobile pages to load in less than 4 seconds, only 2% of the top 100 eCommerce apps pass that test 99% of the time (source).
Despite all these data points, guess what? We've gone from bad to worse over the years. In 2011 the average mobile page served to the end user was 390 KB in size. Do you want to guess what it is today? 1,180 KB (source). So despite the fact that mobile users have become more and more dissatisfied with mobile app performance over time, we keep making mobile pages heavier and heavier, negatively impacting the mobile user experience of our customers.
Most, if not all, of these issues could be fixed or at least ameliorated with great QA strategies. If companies spent the time needed to create a robust mobile app testing strategy, we wouldn’t be at a state of affairs in which 46% of mobile customers are completely dissatisfied with app performance, or one in which 99% of performance issues are tied to easily fixable user interface components that simply load too slowly. (source)
If you want to see how badly websites and mobile apps have been performing, just take a look at this Google tool (which measures the performance of any page that’s not hidden behind a login) and type in the URL of the website of your choice to see how it performs. (source). Another fun exercise is to type in google.com itself. You'll see that even their mobile website only gets a meager 53 out of 100 performance score — and you'll see why.
The point I'm trying to make is that we are where we are today because companies do not generally establish the systematic QA practices that would prevent these issues from happening. In this article we will briefly cover each major component of a great testing strategy. The article is divided into two sections: First we cover general testing practices, of which mobile is a subset. These strategies apply to any software testing out there, including mobile apps and mobile websites. The second section focuses primarily on mobile specific testing challenges and best practices. Ideally, every company will devise testing plans – both manual and automated – that will account for all of the steps in this article. Doing so will ensure that your final mobile product is in line with your mobile users' expectations.
General software testing strategies
1. The QA tester is part of your core team, and is engaged in the entire software development cycle
If you look at the software development process as a sequence of events, the QA portion of the cycle is naturally at or near the end. In traditional waterfall companies, testers are typically engaged only weeks before the testing stage of the project begins. In agile projects, the QA resource is appropriately part of the core team, as that is critical to the overall success of the program. The tester must be a part of the project cycle from the requirements / ideation phase all the way to the release of each new feature. By setting things up in this way, the resource will understand what s/he will be testing up front, and will know exactly what to expect from both a requirements point of view and a user experience / design point of view. For a brief comparison between waterfall and agile methodologies, you can read one of our previous articles here. (source)
2. Define test scripts from the get go (requirements phase)
Test scripts or test cases are critical to the overall success of any QA strategy. Software projects run by inexperienced managers have very loose guidelines for software testing. In a previous life I once heard a vice president ask, “What's so difficult about QA testing? Just make sure everything works.” This is a perfect example of a complete lack of understanding of testing and how it needs to be done. You can’t simply “test everything” — that's not how things are done. As you’re improving a mobile flow or releasing a new piece of functionality, you need to provide clear documentation to the QA tester to ensure testing is done in a logical and systematic manner. It is typically the responsibility of the product team to ensure that the test script is clearly documented and understood by the tester / testing team. The template should be simple, and should be followed by anyone doing software development. I will illustrate this with an example.
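As a minimal sketch (the field names and the checkout scenario below are illustrative, not a standard template), a test case record might be modeled in code like this:

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    """One entry in a test script: what to do, and what should happen."""
    case_id: str
    title: str
    preconditions: list
    steps: list
    expected_result: str
    status: str = "not run"  # later set to "pass" or "fail" by the tester


# A hypothetical test case for a checkout flow.
checkout_case = TestCase(
    case_id="TC-101",
    title="Guest user can complete checkout",
    preconditions=["App installed", "Item available in catalog"],
    steps=["Add item to cart", "Tap checkout",
           "Enter shipping info", "Confirm order"],
    expected_result="Order confirmation screen is shown",
)
```

Whatever shape the template takes, the point is that every case spells out its preconditions, steps and expected result before testing begins.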
3. Unit testing
Unit testing is often done by developers, but sometimes QA engineers participate in this activity as well. Unit testing typically refers to the practice of testing various functions of the code that is being developed. A unit is nothing more than the smallest testable part of an application. A unit can be a function, a module or a class that is being tested. Though many companies have started to invest in automated unit testing tools, at this point the manual process remains the norm. For a great introduction to unit testing and the benefits of the process, you can read this article. (source)
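To make this concrete, here is a minimal sketch in Python: the pricing helper is a hypothetical function invented for the example, and it serves as the "unit" under test.

```python
def apply_discount(price, percent):
    """Return the price after a percentage discount; the smallest testable unit here."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Each unit test exercises one behavior of the unit in isolation.
# A test runner such as pytest would discover and run functions named test_*.
def test_typical_discount():
    assert apply_discount(100.0, 20) == 80.0


def test_zero_discount():
    assert apply_discount(49.99, 0) == 49.99


def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass  # the unit correctly rejects bad input
```

Note how each test targets a single behavior, which is what makes failures easy to localize.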
4. Functional Testing
Functional testing tries to answer two basic questions: "Can a user complete a task?" and "Does this feature really work?” A tester will go through the flow that is being changed to ensure that the user can go back and forth through all the steps in the particular flow that is being built. The goal is to ensure that — from a purely functional point of view — all of the steps work as expected and there are no flaws in the process. For example, ending up on a mobile page without any way to return to either the previous page or to exit the flow is considered a defect in functional testing. For an in-depth analysis of functional testing, you can read this article (source)
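The dead-end defect described above can even be checked mechanically. Below is a small sketch, with a hypothetical three-screen signup flow (all screen names invented), where each screen declares where the user can navigate next:

```python
# A hypothetical three-screen signup flow; each screen lists where the
# user can navigate from it.
FLOW = {
    "email":    {"next": "password", "back": None},       # entry screen
    "password": {"next": "confirm",  "back": "email"},
    "confirm":  {"next": None,       "back": "password"}, # final screen
}


def find_dead_ends(flow, entry="email"):
    """A non-entry screen the user can neither advance from nor back out of
    is exactly the kind of defect functional testing should catch."""
    return [screen for screen, moves in flow.items()
            if screen != entry
            and moves["next"] is None
            and moves["back"] is None]
```

On the healthy flow, `find_dead_ends(FLOW)` returns an empty list; delete the final screen's "back" link and that screen is flagged as a dead end.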
5. UX/Comps Testing
A critical step in software testing is to ensure that the user experience that was built by the UXA and the designer matches its final implementation. I cannot recall a single feature that I've managed throughout my career which passed this test right off the bat. For various reasons (some intentional, some not), developers will frequently fail to implement the functionality in the exact way that the designers outlined. In many cases it takes highly-skilled QA testers to catch these discrepancies. In my experience, the variations between comps and the working software are fairly small, but still noticeable: There may be different fonts between visuals and the final code, different styles, padding between lines that is either too small or too large, error handling that doesn’t match the designs, incorrect copy, etc. An experienced tester must have an eye for this type of detail and must be able to catch these issues immediately.
6. Performance Testing
The simplest rule of performance testing is this: With the addition of a feature, you want to make sure that you haven’t added any delay in the overall performance of the flow that is being changed. In the context of mobile apps, what you are monitoring is the overall speed and responsiveness of the app when a new feature is added. You are comparing these metrics against the current production flow (before/after feature performance analysis) and determining whether the new feature has any impact on the overall device and the battery performance of the device. (more on performance testing here)
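The before/after comparison can be sketched as follows. This is a crude wall-clock timer, not a real mobile profiler, and the 10% tolerance is an invented threshold for the example:

```python
import time


def average_runtime(fn, runs=5):
    """Average wall-clock time of fn over several runs. Real mobile
    performance testing would also track responsiveness and battery
    drain via device profilers."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs


def within_budget(baseline_s, candidate_s, tolerance=1.10):
    """Before/after check: fail the build if the flow with the new feature
    is more than 10% slower than the current production baseline."""
    return candidate_s <= baseline_s * tolerance
```

The idea is simply to record a baseline on the production flow, measure again with the new feature in place, and fail the check if the regression exceeds an agreed budget.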
7. Load Testing
Load testing is huge, especially for popular software – apps and websites alike. Ultimately what you want to test — typically through automated stress-testing scripts — is the breaking point of an application. In other words, how many concurrent users will it take to crash your app or make it unresponsive. Every time Black Friday comes and you’re on an app or website that isn’t working properly, it's a sure sign that a QA team didn’t do a good job of load testing to make sure that their servers could withstand high-volume traffic. In addition to testing the breaking points of an app, you want to test what it would take before the app starts to slow down (when it can still be used but a performance lag is visible). For example, some e-commerce mobile apps become a lot more difficult to manage after adding a certain amount of items to a cart. That might be a totally acceptable thing from a business point of view, but a tester must do their due diligence and document the conditions under which an app will gradually slow down. (More on best practices for load testing here)
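The ramp-up-until-failure idea can be sketched like this. The "endpoint" below is a toy simulation (its capacity and latency numbers are invented); a real stress script would fire concurrent requests at an actual server:

```python
def simulated_endpoint(concurrent_users):
    """Stand-in for an app backend: latency grows with load, and the
    'server' falls over past its capacity. Both numbers are invented
    for this sketch."""
    if concurrent_users > 500:
        raise RuntimeError("server overloaded")
    return 0.1 + 0.001 * concurrent_users  # response time in seconds


def find_breaking_point(step=50, limit=2000):
    """Ramp the simulated user count upward, as an automated stress
    script would, until the endpoint fails; return that load level."""
    users = step
    while users <= limit:
        try:
            simulated_endpoint(users)
        except RuntimeError:
            return users
        users += step
    return None  # no breaking point found within the tested range
```

A real report would record both the hard breaking point and the earlier load level at which latency first becomes visible to users.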
8. Regression testing
You know how in movies involving time travel, some dude goes back to the past and does something stupid and apparently insignificant, only to return to the present and see that the planet is ruled by robots or something like that? That’s regression testing for you. Even the smallest code change can have unintended consequences for the overall application, often in ways you wouldn’t have dreamed possible until you actually see it in action. Regression testing makes sure that when a small change is made, the overall flow still works as expected once development of the feature is complete. As an example, last week I was supposed to test a certain page in checkout where a minor change had been made. When I tried to enter the flow by clicking on the checkout button, the call to action button no longer worked. Even though the change in question was 3 levels deep into the flow, somehow in the development process the entire flow got broken. That’s why regression testing is critical – to ensure that the end-to-end flow that is being touched by a developer still works after a change has been made. (More on regression testing here)
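A regression suite is just the flow's end-to-end checks, re-run after every change however small. Here is a toy sketch (the checkout model and failure messages are invented for the example):

```python
# Toy model of a checkout flow. A one-line change elsewhere in the code
# can disable the call-to-action button, which is exactly the kind of
# breakage regression testing must catch.
def build_app(cta_enabled=True):
    return {"cart": ["widget"], "cta_enabled": cta_enabled}


def click_checkout(app):
    if not app["cta_enabled"]:
        raise RuntimeError("call-to-action button is broken")
    return "payment_page"


def regression_suite(app):
    """Re-run the end-to-end checks for the whole flow; return any failures."""
    failures = []
    try:
        if click_checkout(app) != "payment_page":
            failures.append("checkout did not reach the payment page")
    except RuntimeError as err:
        failures.append(str(err))
    return failures
```

Against a healthy build the suite returns no failures; against a build where the button was accidentally disabled, it reports the broken entry point even though the "real" change was deeper in the flow.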
9. Manual vs Automated Testing
I mentioned briefly that unit testing can be automated, and the reality is that various types of QA tests can and should be fully automated. Unit testing, functional testing, load testing, performance testing, smoke testing, regression testing – all of these test types can be fully automated, and it’s great. A computer won't recognize if there’s a discrepancy between a designer’s comps and the final experience built by a developer, but it can easily test and document anything in the flow that is not working as expected. When a developer is done making a change they can easily run the testing script overnight and return to work to see a list of issues that the testing software identified. Personally, I strongly recommend doing research and choosing an automated test solution. It saves time and money, it improves accuracy and it can help developers identify and fix issues before they ever reach the QA phase. Though certain QA tasks will never be replaced by a computer / software, there's an awful lot of work that can be done using automated testing tools. There's a great article on the pros and cons of both manual and automated testing here.
10. User Acceptance Testing
User Acceptance Testing is something of a misnomer. That's because you do not usually have true end users testing the new feature. Instead you have a select group of users from within your company who have been tasked to participate in this process, test the new software change and (if everything goes well) sign off on the release. Here's a hint: It almost never goes off 100% without issues. And that’s OK! This step is usually the final stage in the feature lifecycle for any functionality built for websites and applications alike. Typically, a UAT strategy is planned well in advance and is part of the overall testing plan. Depending on the complexity of your software and the size of your company, different stakeholders / internal users will participate in the UAT for various features. Feature-specific UAT normally follows the same process: The planning of the session, drafting test cases, selecting UAT stakeholders, explaining to stakeholders what’s in-scope and out-of-scope for the UAT session, walking testers through the main scenarios, testing the actual feature, documenting bugs / issues, and finally signing off (also known as go/no-go decisions). If you want to read more about user testing and each step involved, this is a great introductory article. (Source)
So far we've covered various testing strategies that are common to all types of software products, whether websites, mobile websites, native or hybrid apps or even wearable technology. Now let's quickly go through a set of testing challenges and practices that are specific to smartphones.
Mobile-specific testing strategies
Mobile testing is difficult — it truly is, because there are so many different screen sizes, operating system versions, network dependencies and other factors that can impact your overall test plan and timeline. Personally, I like how the difficulty of mobile app testing is captured on these two slides:
Let’s explore some of these issues in more detail.
11. Device Testing
You know what’s cool about building and testing websites? There are only two device types that you would consider testing for: Mac and Windows laptops/desktops. Sure, they have various resolutions that you need to account for, but those are certainly limited in scope. Do you know how many device types exist on Android alone? 24,000 as of 2013. This matrix from OpenSignal shows you just how f*ed up the Android market is (at a global scale) when it comes to issues impacting proper QA testing. Basically, what you see below is a bird's eye view of all the different screen types out there on which Android is run.
(Image: Android devices)
Android devices literally come in all shapes and forms, so deciding which devices and what screen sizes an application should be optimized for is critical to the overall success of the QA strategy. No matter what you do, there are likely to be some devices and some device sizes that simply won’t be supported, so making sure you have a clear company policy on what you're willing to support will ensure that testers follow the rules and test against all supported devices.
12. OS Version Testing and Support
What's annoying about mobile testing is not limited to the fact that the device market is incredibly fragmented. It's also that both iOS and Android have a significant number of operating system versions in the market. Granted, iOS is significantly better than Android in this regard and has only 3 versions of the OS that are supported — which is one of the many reasons why (in a different article) we suggest that companies develop their first application on iOS (source). By contrast, Android has at least 7 different OS versions that are still on the market in the US alone. To be fair, since Apple is both the manufacturer and the software provider for the iPhone, it has full control of the number of versions available to its users, whereas Android doesn’t have full control of these versions. Instead it relies on manufacturers to push out updates. From a testing perspective this is a total nightmare, and I'm not even talking about the manufacturers' versions of the operating system with their tweaks. This means that it is very important for your company to have a testing policy in place that clearly calls out which versions of Android / iOS are supported, so that testers can check each new piece of functionality on each supported operating system version.
13. Device testing vs emulator testing policy
Given the complexity of mobile testing, there's been an understandable surge in software-as-a-service options aimed at streamlining QA testing. There is now an abundance of cloud-based emulators that QA testers can use to quickly test a certain functionality. Perhaps the most popular of these is browserstack.com, a subscription-based solution that a company can use to quickly and effectively test any mobile app. They started as an online simulator only, but as of 2016 BrowserStack testing is happening on actual devices as well. From a testing perspective you would expect QA engineers to test manually on one to three devices, and at least on the most popular versions of Android and iOS. But a lot of testing, particularly for less-popular device types, screen sizes and OS versions, should be done through an emulator to save time and money. For example, BrowserStack supports the following mobile types, and I’m sure many of their competitors do the same:
14. Carriers Testing & Network Connectivity testing
As we all know, various carriers provide different connectivity options to their customers: Anything from 2G to LTE. So one additional thing testers need to take into account is how the performance of a feature may differ from one carrier to the next. For example, app performance testing shouldn’t exclusively be conducted on WiFi. You need to specifically test your app on actual 3G, 4G and LTE mobile communication standards to see how the application performs. Ideally you should try to test it on at least a few of the giant carriers, such as Verizon, T-Mobile, AT&T, and Sprint – you pick your poison. Additionally, you want to make sure that you test what happens when connectivity is lost or suddenly shifts from 4G to 2G. How does the application behave? What is the user expected to do?
15. Interrupt conditions
You would think that with the rise in popularity of smartphones and mobile applications, companies would have caught up on testing practices and business rules for interrupt conditions, but you’d be wrong. So what are interrupt conditions? Incoming notifications, text messages or calls are your typical examples of interrupt conditions. The user is doing something on your app, and through no fault of theirs (they weren’t trying to leave the app), something interrupts its operation. The question becomes "How should the app behave when the interruption is over?" (e.g., I hang up the phone).
Unfortunately, a lot of big companies do a very poor job of this and put no thought into how to deal with it. If you’re in the middle of a Candy Crush game and someone calls you on the phone, once you return to the game you'll be booted out to the main screen and have lost all your progress. The Groupon app will automatically take you back to the homepage, even if you were on a specific product details page before being interrupted. These are just a couple of examples of bad practices for when an interruption occurs. Testers should always check what happens when interruptions occur, and they should report these issues to the business teams to ensure that a proper course of action is taken to prevent customers from getting frustrated.
16. Crowd-source testing
Crowd-source testing is an emerging practice in the field of software testing, and particularly in mobile testing. The concept is simple, and is similar to using a focus group: When you use a focus group to test a certain user experience, you contract with a third party company that brings together a set of current or potential users to converse about your product objectively in a room. After the focus group is completed, the third-party company prepares a report. Crowdsource testing is similar in that you can contract with a third party agency. The agency typically doesn't charge anything upfront. Instead, they recruit a pool of testers, put your app in front of them, and both the agency and the testers get paid based on the bugs that they find.
There are multiple advantages to crowdsource testing. First, crowdsource companies can put your product in front of thousands of testers. That means that testing can be done faster, which helps you get your product launched sooner. It’s definitely cost efficient because you pay people based on bugs found. You can also test a wide variety of devices, device sizes, operating systems and OS versions without having to invest in the purchase of any of these additional devices for your company.
There are some key disadvantages to crowdsource testing. First, if you’re working on secret projects you may not want random testers over whom you have no control to have access to your product. Additionally, communicating with testers may prove to be difficult, even with the rise of complex online platforms for crowd testing. Lastly, because you don’t know exactly how many testers will qualify (when you have a testing screener), it may be difficult to manage the overall project plan since you don’t know how quickly they will complete the tasks. (More on the pros and cons of crowd-source testing here)
17. Security Testing
Security testing on a mobile application is a must-have these days, as the stats are scary. According to a recent Arxan “Annual State of Application Security Report (January 2016)”: (source)
- 100% of the most popular apps on iOS and Android have been hacked
- 90% of the applications tested as part of their methodology had at least 2 critical vulnerabilities
- 50% of organizations have zero budget dedicated to security testing
We will cover security best practices in a separate article, but let’s go through the minimum security vulnerabilities you should test for. First is data flow vulnerability. Do a quick QA test on flows that include personally identifiable information and data input required from the user. Where is this data stored? Double check that the information is sent over secure channels and is encrypted at all times, and make sure that it does not get saved on the client’s side (the smartphone) at any point. Second, focus on data leakage. Make sure that data isn’t leaked through log files (been there, seen that issue multiple times!). Lastly, make sure that all data traveling between the app and the server side is protected. Most robust apps use HTTPS for data encryption, and if your company follows this practice then you need to do a quick security test to ensure that all authenticated pages are served over HTTPS, including CSS, scripts and images.
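That last check is easy to automate. Here is a minimal sketch: given a list of URLs loaded by an authenticated page (the example hostnames and paths are invented), it flags anything not served over HTTPS:

```python
from urllib.parse import urlparse


def insecure_resources(urls):
    """Flag any resource not served over HTTPS. On an authenticated page
    this includes CSS, scripts and images, not just the page itself."""
    return [u for u in urls if urlparse(u).scheme != "https"]


# Hypothetical asset list for a checkout page.
page_assets = [
    "https://example.com/checkout",
    "https://example.com/static/app.css",
    "http://example.com/static/logo.png",  # mixed content: should be flagged
]
```

In practice you would feed this the resource list captured from a proxy or network log during a test session, and fail the build if anything comes back flagged.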
This article provides at least 17 different testing strategies that can be employed in the process of testing a new feature on a mobile device. Though this is not glamorous work, having a robust and mature testing strategy is key to your application’s success. At the end of the day, the reality is that most mobile users are unforgiving. If they see a mobile app constantly taking more than 2-3 seconds to load, they will abandon it and never come back. If they see that anything in a flow is broken and prevents them from proceeding with the task at hand, they will chastise your app. In the customers' eyes, you are only as good as your app's ability to let them complete the tasks they had in mind when they tapped on your icon. That’s why QA testers are the hidden heroes of mobile app development. They make sure that your shiny new application can actually deliver on the promise you make to your customers in a smooth and convenient manner. Follow these 17 strategies and you'll have 99.99% of your mobile app releases bug-free.