Wouldn’t it be nice to have 100% coverage of a system with automated tests? Imagine it: no manual testing, and every time something is updated you get a full report on the whole system. Everything would be perfect; no bugs would be missed. Right?
Sadly, no. This is impossible; it’s a complete utopia. There’s no way to account for every single action a human can take in our apps. Why? Because human beings are unpredictable.
This is even more noticeable when we’re testing the UI with automated frameworks such as Selenium, Appium, and other front-end solutions for automated testing.
Even though the intention to automate everything is noble and positive, it can sometimes lead us to write impossibly complex code that is hard to maintain, fragile, and not cost-effective. To test unpredictable scenarios, we should use unpredictable methods: humans.
So where does automation come in handy? With things like regression tests, smoke tests, and performance tests: the kind of testing that always follows the same path and tends to be repetitive. Automating that QA effort lets human testers focus on the fun stuff, which also keeps their concentration up and makes them more productive.
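To make the idea concrete, here is a minimal sketch (not tied to Selenium or any particular framework; the check names are hypothetical placeholders) of what a smoke-test run boils down to: a fixed, repeatable list of checks that executes the same path identically every time.

```python
# Minimal smoke-test harness: run a fixed list of named checks
# and report which ones passed. The checks below are placeholders
# for real probes (e.g. "home page loads", "login works").

def run_smoke_tests(checks):
    """Run each (name, check_fn) pair; return {name: passed}."""
    results = {}
    for name, check in checks:
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False  # a crash counts as a failure
    return results

# Hypothetical checks standing in for real UI/API probes.
checks = [
    ("home page loads", lambda: True),
    ("login works", lambda: True),
    ("search returns results", lambda: 1 + 1 == 2),
]

print(run_smoke_tests(checks))
```

Because the run is deterministic and cheap to repeat, it can be wired into every build, which is exactly the repetitive work we want to take off human testers’ plates.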
How many manual versus automated tests you have is up to you. It depends a lot on the type of system being tested; some systems allow for broader automated coverage than others. Sometimes, if something seems too hard to automate, it’s because it shouldn’t be automated at all.
When our goal is 100% automated test coverage of our system, we’re very likely to run into a number of problems.
So, does that mean we should focus on covering only a small portion of our system? Absolutely not. Not aiming for 100% coverage doesn’t mean we should just cover the basics. It means we should focus on adding tests that add value to our solution.
There’s no golden rule for this; most of it comes down to common sense. But usually, the best candidates for automation are stable, repetitive scenarios.
We should never expect to test new features with our automation. The first time a new feature ships, it’s preferable to have actual human eyes on it, testing it from as many angles as possible. Automated tests are good for validating that something still works as expected despite the changes made to the app, not the other way around.
It’s a lot easier and more productive to write an automated test for a system that is currently stable, and a new feature can’t guarantee that. In an ideal world, a new feature is tested and documented by a human first; then, once that feature becomes part of the regression suite, it is automated based on the findings of the manual tests.
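As an illustration of that workflow (the function and the expected values here are hypothetical, standing in for behavior a manual tester would have documented), a manual finding can be pinned down as an automated regression check once it is written up:

```python
# Hypothetical example: a manual tester documented that applying a
# 10% discount to a $50.00 cart yields $45.00. That documented
# finding becomes the expected value in a regression test.

def apply_discount(total_cents, percent):
    """Apply a percentage discount, rounding to whole cents."""
    return round(total_cents * (100 - percent) / 100)

def test_documented_discount_behavior():
    # Expected values come from the manual test report, not guesses.
    assert apply_discount(5000, 10) == 4500
    assert apply_discount(5000, 0) == 5000

test_documented_discount_behavior()
print("regression checks passed")
```

The point is the order of operations: the human explores and records the expected behavior, and only then does automation lock that behavior in so future changes can’t silently break it.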
In conclusion, keep it simple, keep it tidy, and keep it meaningful.
This article was written by Mike Arias, Senior Software Testing Engineer at TechAID.
Twitter: @theqaboy
Blog: qaboy.com/
Copyright © 2020 TechAID Solutions, All rights reserved.