Some debates are not easily resolved. Kanban or Scrum? Vim or Emacs? In software, many engineering debates are started but never settled. Most of them become emotional or reductionist; they often have little to do with actual engineering trade-offs and everything to do with opinion. Which raises the question: which testing method is best?
One thing we can all agree on is that developers of monolithic applications, of varying skill levels, have all at some point patched programs and introduced complex logic, causing performance inefficiencies, dead code, and piles of technical debt. That debt makes the system harder to change, creates inefficiencies, and means rework costs more time.
The solution? To answer that, we first have to consider the problem. Technical debt slows a team's velocity and can grind work to a halt. It accrues the same way financial debt does: it incurs 'interest payments', which materialize as all the extra effort required in future development because of earlier 'rush-to-meet-the-deadline' design choices.
A key component of any successful software project is testing. The process you choose depends on the project's requirements, how quickly it needs to be completed, what resources you have at your disposal and, most importantly, whether it is the right fit for your project. Whichever route you take, there are pros and cons to each.
For example, manual testing does not guarantee accuracy at all times due to human error, so it can be seen as less reliable. Automated testing has many advantages, such as enforcing code cleanliness: it can evaluate syntax, verify that comments and documentation are actually present in the code, and help you build good development and coding practices into your code base.
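As a minimal sketch of this kind of automated check, the snippet below flags functions that are missing docstrings; the function name and sample code are illustrative, not from any specific tool.

```python
# A hypothetical automated code-quality check: flag functions without docstrings.
import ast

def find_undocumented_functions(source: str) -> list:
    """Return the names of functions in `source` that lack a docstring."""
    tree = ast.parse(source)
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
            missing.append(node.name)
    return missing

sample = '''
def documented():
    """This one has a docstring."""
    return 1

def undocumented():
    return 2
'''
print(find_undocumented_functions(sample))  # -> ['undocumented']
```

Run as part of a CI pipeline, a check like this applies the same standard to every commit without any human effort.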
However, like anything, its benefits depend on how it is used. Which testing scenarios are best suited to manual testing?
Fact-Finding Testing: This type of testing calls upon a tester's experience and intuition. Human skills are often crucial to conducting the testing process in this scenario.
User-Friendly Testing: Simply put, this is where you need to measure how user-friendly a piece of software or a product is for the end user. In this scenario, observation from a human perspective is the most critical factor.
Ad-hoc Testing: Here, no planned method of testing is required; the insight and understanding of the person testing is the only important factor.
Automated testing requires a large capital investment to bring the right suite into your organization, along with strategy development, staging, training and ongoing maintenance of the new packages. With that in mind (as well as the obvious software cost), is it even worth cleaning up your manual-testing technical debt?
Unfortunately, product owners and stakeholders often do not see technical debt as a priority. So that is the first challenge: getting the board and executives to value paying down technical debt as much as they value new features.
One obvious benefit to board members and executives is return on investment. When you are planning to invest in a suite of automated testing tools, you need to work out how those tools will pay for themselves in ROI terms.
Automated testing uses software tools to run detailed, repetitive, and data-intensive tests automatically. It helps teams improve software quality and make the most of their always-limited testing resources. It lets teams test faster, cover substantially more code, improve accuracy, and free up engineers to focus on the tests that require manual attention and their unique human skills.
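To illustrate the repetitive, data-intensive work that suits automation, here is a minimal data-driven test sketch using Python's `unittest`; the `discounted_fare` function and its discount rules are hypothetical.

```python
import unittest

def discounted_fare(base_fare: float, passenger_age: int) -> float:
    """Apply a hypothetical child (50%) and senior (30%) discount."""
    if passenger_age < 12:
        return round(base_fare * 0.5, 2)
    if passenger_age >= 65:
        return round(base_fare * 0.7, 2)
    return base_fare

class TestDiscountedFare(unittest.TestCase):
    def test_fare_table(self):
        # One loop covers many cases -- the kind of repetition a
        # human tester would find tedious and error-prone.
        cases = [
            (100.0, 5, 50.0),
            (100.0, 11, 50.0),
            (100.0, 12, 100.0),
            (100.0, 40, 100.0),
            (100.0, 65, 70.0),
            (200.0, 70, 140.0),
        ]
        for base, age, expected in cases:
            with self.subTest(base=base, age=age):
                self.assertEqual(discounted_fare(base, age), expected)

# Run the suite programmatically so it can be embedded in any script.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDiscountedFare)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Adding another hundred fare cases costs one line each, which is exactly why this style of testing scales where manual checking does not.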
The cost of manual testing includes the time, lost hours and effort of the testers and QA managers. With automated testing tools, you gain the ability to test faster, more efficiently and more accurately, and to deliver a far less buggy application within the delivery time frame.
Automated testing is favored typically in the following scenarios:
Regression Testing: Ideally suited to automation because code changes frequently and regression suites need to be run correctly and quickly.
Repetitive Testing: Repetitive task execution is best automated.
Performance Testing: Simulating thousands of concurrent users virtually requires automation.
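The regression scenario above can be sketched as a golden-file style check: outputs captured from a known-good release are replayed after every change. The `slugify` helper and its golden cases below are illustrative.

```python
# A minimal automated regression check against "golden" known-good outputs.
import re

def slugify(title: str) -> str:
    """Turn a page title into a URL slug (the code under frequent change)."""
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")

# Expected outputs captured from a known-good release.
GOLDEN = {
    "Hello, World!": "hello-world",
    "  Spaces  and   Tabs ": "spaces-and-tabs",
    "Already-a-slug": "already-a-slug",
}

def run_regression() -> list:
    """Return the inputs whose output no longer matches the golden value."""
    return [title for title, slug in GOLDEN.items() if slugify(title) != slug]

print(run_regression())  # -> [] when no regression has been introduced
```

Because the suite is cheap to rerun, it can gate every commit, catching regressions the moment they appear rather than weeks later.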
Automated testing is essential when you have complex, data-heavy scenarios, for example, an airline client who uses software to ensure that their web booking system works correctly. Support for nested loops in test cases solves the particular problem of validating data across multiple origin and destination points, helping the client deliver a quality product.
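A nested-loop check of that kind might look like the sketch below, which validates every origin/destination combination; the airport codes and the `is_bookable` rule are hypothetical stand-ins for a real booking system.

```python
# Hypothetical route checker: exhaustively validate origin/destination pairs.
ORIGINS = ["LHR", "JFK", "SFO"]
DESTINATIONS = ["LHR", "JFK", "SFO", "NRT"]

def is_bookable(origin: str, destination: str) -> bool:
    """A route is bookable unless origin and destination are the same."""
    return origin != destination

failures = []
for origin in ORIGINS:             # outer loop: every origin...
    for dest in DESTINATIONS:      # inner loop: ...against every destination
        expected = origin != dest  # the business rule we expect to hold
        if is_bookable(origin, dest) != expected:
            failures.append((origin, dest))

checked = len(ORIGINS) * len(DESTINATIONS)
print(f"checked {checked} routes, {len(failures)} failures")
```

With three origins and four destinations this is only twelve cases, but the same loop scales unchanged to thousands of city pairs, which is precisely where manual verification breaks down.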
Automated testing is more about testing code that is already built, or has already been checked, than code that is in flight. Historically, the responsibility could not be put back on the developer, but modern practice places more of the testing burden on developers themselves rather than on a separate, distinct QA or QE team.
Automated testing is a great way to cover the majority of your testing requirements and makes a good case for the effective use of time and resources. You should still choose your automation tool carefully, and make sure you have staff with the skills to integrate it into your testing work. If you don't, the tool will sit on the shelf collecting dust and, worst of all, deliver no ROI.
If you do go down the automated route, make sure the tool is the best available for your requirements, but be aware that you are unlikely to find a complete match. Learn the tool's limitations and consider using manual testing techniques to compensate for them. Open source tools are also a good automation foundation and could save you a lot of money.
The best way to settle the debate is with reason. You cannot rely entirely on either manual or automated testing, so why not adopt a combined approach?