Beginner's Tips for Kickstarting Software Testing
Software testing is a highly specialized, quickly evolving field. New packages and procedures are being introduced every day, and automated testing methods are increasingly replacing manual methods, significantly speeding up release times while ensuring high-quality code. Navigating through this fast-paced, demanding field can be difficult, especially for junior testers and trainees.
Many inexperienced testers mistakenly believe that their central objective is to find bugs in the application. They tend to focus on crashes within the system, testing through unrealistic scenarios while neglecting to prioritize and carry out essential procedures. Other trainees are simply unaware of all the methods and techniques they will need to employ to ensure the product functions flawlessly while fulfilling all the client’s requirements.
To provide junior testers and trainees with a solid basis they can build on, this article presents an overview of testing types, testing techniques, and defect tracking procedures. Having a strong understanding of these concepts, and how to prioritize them, will help junior testers and their teams successfully deliver high-quality software.
Prioritizing Test Procedures
Deciding which tests and design techniques are needed for a specific application is a difficult but necessary first step. As detailed in the following sections, many different procedures can be implemented, depending on the application’s requirements. Testing every combination of parameters is not possible, so you must prioritize your tasks.
As a general rule, you should concentrate on testing aspects of the application that have the most risk in terms of functionality, user cases, and flows. You will also have to adjust your priorities as the project moves forward, and changes and improvements continuously occur. Furthermore, testing rounds are usually scheduled to take place within a certain time frame, so you must make sure the necessary procedures can be fully carried out, given these constraints.
Areas designated as high risk will vary greatly from application to application, and often you will have to rely on common sense when scheduling your tasks. For example, newly added features will obviously have a higher risk of containing bugs than features already in production, so you will need to monitor such changes. Applications that impact human health and/or involve the transference of monetary assets should also be designated as high priority and continuously tested.
Efficiently prioritizing tasks is a skill you will gain through experience, but no matter the circumstance, your testing procedure must be based on level of risk.
Testing Types and Classifications
Junior testers must possess a thorough understanding of all the options available to them. Having a solid grasp of the various types of tests, as detailed in this section, is the basis on which to build experience and knowledge. As you will see, each testing type has its own set of sub-categories that must also be understood.
Reviewing the program code
- Black Box entails no code review. All testing is done through UI.
- White Box entails code review and analysis.
- Gray Box is a combination of Black and White Box testing.
- Functional testing focuses on the system’s individual functions.
- Usability testing determines if the application is user-friendly. Its goal is to find ways to make the product more attractive to users.
- Load testing is focused on the application’s behavior under specific loads.
- Performance testing determines the performance of the software product.
- Stress testing verifies that the application works acceptably in certain stress environments.
- Security testing ensures the application’s confidential data is protected from hackers, viruses, and other threats.
- User Interface testing verifies that the application’s graphical interface meets requirements.
- Compatibility testing verifies that the product operates correctly within a particular environment.
- Localization testing verifies the accurate translation of user interface elements, including translated system messages.
- Installation testing verifies that the software successfully installs and configures, and that it can be upgraded and/or uninstalled.
- Manual testing is performed by test engineers.
- Automated testing is performed by program scripts created by developers or engineers specialized in automated testing.
- Semi-Automated testing is performed by program scripts and test engineers.
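To make the manual/automated distinction concrete, here is a minimal sketch of an automated test. The function under test, `add`, is a hypothetical stand-in; in a real project it would be imported from the application’s code base, and a framework such as pytest would discover and run the test functions.

```python
# Hypothetical function under test; in practice this would be
# imported from the application's code base.
def add(a, b):
    return a + b

# An automated test: a script checks the result instead of a human.
# A test runner such as pytest would collect and execute this automatically.
def test_add_returns_sum():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

test_add_returns_sum()
print("all checks passed")
```

The same check performed manually would require a person to feed in the inputs and compare the outputs by eye on every release; the script does it identically every run, which is what makes automation attractive for regression testing.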
Working with a running application
- Static testing focuses on reviewing and/or analyzing the product requirements without executing the software code.
- Dynamic testing checks the product requirements while the software application is running.
- Component testing checks the software’s individual components or functions.
- Integration testing is focused on the interfaces and the interaction between integrated components or systems.
- System testing checks the process as a whole.
- Positive tests verify that the program functions properly when it is fed input parameters that correspond to the requirements.
- Negative tests verify that the program will work properly in cases of unexpected inputs.
- Alpha testing is generally performed by potential users or an independent test team at the developers’ site, but outside the development team itself. This type of testing can also be simulated.
- Beta testing is performed by customers independent of the development team to determine whether or not a component or system actually meets the customers’ needs and corresponds to the related business processes.
- Regression testing “retests” a program after modifications have been made to ensure that they haven’t resulted in errors and/or haven’t activated areas not subject to change.
- Acceptance testing determines whether or not the system complies with the predefined needs, requirements, and business processes. It ultimately enables users, customers, and/or other authorized personnel to accept the system or ask for changes.
- Smoke testing consists of running a small subset of the planned tests that covers the basic functionality of a component or system.
- Documentation or Formal testing is conducted by running predetermined scenarios.
- Exploratory testing is somewhat informal and consists of the tester actively controlling the design of the tests as he/she performs them.
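The positive/negative distinction above can be illustrated with a short sketch. The `parse_age` function below is a hypothetical validation routine invented for this example; the point is that positive tests feed it valid input, while negative tests deliberately feed it invalid input and expect a controlled rejection rather than a crash.

```python
# Hypothetical validation function under test: accepts ages 0-150.
def parse_age(value):
    age = int(value)  # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

# Positive test: valid input that matches the requirements.
assert parse_age("42") == 42

# Negative tests: unexpected input must be rejected gracefully.
for bad in ["-1", "200", "abc"]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # expected: the program handles invalid input predictably
    else:
        raise AssertionError(f"{bad!r} should have been rejected")

print("positive and negative checks passed")
```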
In real-life situations, software test engineers do not necessarily perform all these tests, and some types (performance, security, usability) will be outsourced to other companies. However, engineers will consistently perform basic tests using as many different methods as possible. The outcomes of these basic tests will determine what future tests will be required for a specific application.
Test Design Techniques
Exploring various design techniques is an important part of software testing. In general, the application’s requirements determine what strategies you will need to employ.
Creating test cases from requirement specifications or Black Box design
- Equivalence Partitioning requires dividing the input data of specified software units into partitions of equivalent data and then creating tests for these partitions. Each partition should be covered by at least one test case.
- Boundary Value analysis tests the boundary values of accepted input data.
- Decision Table testing allows for the creation of test cases by analyzing the product’s requirements. This technique also helps the tester structure and prioritize these requirements.
- State Transition testing requires the system’s states to first be defined, perhaps in a graph. Tests are then created to check that the system transitions smoothly from one state to the next.
- Use Case testing simulates the interaction between the user or “actor” and the system through the creation of end-to-end scenarios.
- White Box testing techniques cover specified parts of the system. Examples include statement coverage, branch coverage, and path coverage.
- LCSAJ testing, or Linear Code Sequence and Jump testing, is a white box technique that measures coverage of linear code sequences, each beginning at the program’s entry point or a jump target and ending at a jump.
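Equivalence Partitioning and Boundary Value analysis are easiest to see on a concrete rule. The discount rule below is an invented example, not from any real specification: quantities 0–49 get no discount, 50–99 get 10%, and 100 or more get 20%.

```python
# Hypothetical requirement: 0-49 items -> no discount,
# 50-99 -> 10%, 100 or more -> 20%.
def discount(quantity):
    if quantity >= 100:
        return 0.20
    if quantity >= 50:
        return 0.10
    return 0.0

# Equivalence Partitioning: one representative value per partition.
assert discount(25) == 0.0    # partition 0-49
assert discount(75) == 0.10   # partition 50-99
assert discount(150) == 0.20  # partition 100+

# Boundary Value analysis: test the edges where partitions meet,
# since off-by-one errors cluster at these boundaries.
assert discount(49) == 0.0
assert discount(50) == 0.10
assert discount(99) == 0.10
assert discount(100) == 0.20

print("partition and boundary checks passed")
```

Note how the two techniques complement each other: partitioning keeps the test count small, while boundary values catch the off-by-one mistakes that representative mid-partition values would miss.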
Deriving test cases based on the tester’s intuition and/or experience
- Error guessing is entirely based on the tester’s experience and personal judgment. Good testers consider conditions in which the software may experience the highest chances of errors. This technique is not limited by any rules or procedures.
- Exploratory testing is about investigation, discovery, and learning. As a kind of “thinking” activity, testers do not generally create cases in advance, although they may note ideas or areas to check before they actually execute any tests. Similar to error guessing, the tester is free to determine how and what to test.
When using the testing techniques described in this section, you will find many different bugs in the system, which will lead you to question if the product’s requirements can realistically be met. As a general rule, you should document all your concerns throughout the entire process.
Defining and Tracking Defects
The next step you will take as a QA engineer is to correctly track all the bugs and errors you discovered in the previous phase. You will often find that the project you are working on already has a bug- or task-tracking system in place for this purpose. Some such systems are Jira, Pivotal, Mantis, Quality Center, and TFS Bug Tracking. If your project does not have this capability, you can use Google Sheets if security policies allow.
As a testing engineer, you must provide the software development team with the information they need to correct defects. You will be required to accurately and concisely describe the steps to reproduce errors. Screenshots and/or video can help clarify the information, along with detailed error logs. You should add links to the page that contains the defect if possible. You will also need to note the environment in which the defect appears (e.g. the operating system, web browser, mobile device, etc.). Lastly, you should try to reproduce the error several times and in different environments before you start documenting it; varying the input data as you do so will help you localize the bug.
In most cases, your bug report will include the following:
- Bug name or type
- Environment in which bug was reproduced
- Steps you took to reproduce the defect and the results
- Expected results
- Severity of the defect (to what extent it can affect the software)
- Priority or order in which defects should be resolved (The project manager or owner, rather than the QA tester, usually performs this task.)
- Various attachments (screenshots, video, and/or audio files meant to help developers reproduce and localize defects)
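The fields listed above can be sketched as a simple data structure. This is only an illustration of the report’s shape; real trackers such as Jira or Mantis define their own schemas, and the field names here are invented for the example.

```python
from dataclasses import dataclass, field

# A minimal sketch of the bug report fields listed above.
@dataclass
class BugReport:
    name: str
    environment: str
    steps_to_reproduce: list
    actual_result: str
    expected_result: str
    severity: str              # e.g. "critical", "major", "minor"
    priority: str = "unset"    # usually assigned by the project manager
    attachments: list = field(default_factory=list)

report = BugReport(
    name="Login button unresponsive",
    environment="Windows 11, Chrome 120",
    steps_to_reproduce=[
        "Open the login page",
        "Enter valid credentials",
        "Click 'Log in'",
    ],
    actual_result="Nothing happens; no network request is sent",
    expected_result="User is redirected to the dashboard",
    severity="major",
)
print(report.name, "-", report.severity)
```

Filling in every field, even in a spreadsheet, forces you to separate the actual result from the expected result, which is the piece developers most often need.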
At this point, you may wonder if an issue needs to be reported if it is not reproducible in every test run. You should report the issue unless you are certain it was caused by stale cached data or the wrong test environment. If you find that the input data does not correspond to the production environment, you would not report the error. In all other cases, you would report the error to the team. Developers might find the cause of a bug by closely checking the code. You or another QA engineer may also be able to determine the steps to reproduce the error through test runs. Whatever the outcome, the team needs to know they have a potential issue and plan releases accordingly.
Summary and Recommendations
Junior QA engineers must have a solid grasp of all the testing options available to them. The different types, techniques, and methodologies that I have described are all equally important. You will discover that your job goes far beyond simply catching and documenting bugs. You will play a very important role in the production of high-quality software that meets all your client’s requirements.
As you begin your career as a software testing engineer, keep the following recommendations in mind:
- Be familiar with all testing types and classifications.
- Prioritize test procedures according to risk. Be prepared to adjust procedures as the project progresses.
- Perform basic tests using as many different methods as possible, given the time and budgetary constraints.
- Meticulously track all bugs and errors and report this information concisely and thoroughly.