Testing cycles often happen under a considerable time crunch, making it impossible to run all of our test cases or to test every part of the application as thoroughly as we’d like. So how do we prioritize our tests to hit the most critical parts of the application? The answer is risk-based testing.
Using a risk assessment strategy, we can prioritize tests based on the calculated risk to the company, its data, or its users. While teaching a complete risk-based testing approach in a short blog post is impossible, I am going to hit on a few key points.
In the simplest terms, the risk associated with an area of the application can be defined as the cost of failure multiplied by the probability of failure. You have to identify the areas of the application that are most likely to fail and where those failures would cause the most damage. There are a few ways to make those identifications.
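As a rough illustration, one common formulation scores each area by multiplying impact (cost of failure) by likelihood (probability of failure) on simple numeric scales. The area names, ratings, and scales below are hypothetical examples, not part of any standard:

```python
# Minimal sketch of risk scoring: impact (cost of failure) times
# likelihood (probability of failure), each rated 1-5.
# Area names and ratings are hypothetical.

def risk_score(impact: int, likelihood: int) -> int:
    """Higher score = test this area first."""
    return impact * likelihood

areas = {
    "checkout": risk_score(impact=5, likelihood=4),      # revenue-critical, recently changed
    "search": risk_score(impact=3, likelihood=3),        # heavily used, stable code
    "profile_page": risk_score(impact=2, likelihood=2),  # low traffic, cosmetic failures
}

# Rank areas from highest risk to lowest
for name, score in sorted(areas.items(), key=lambda kv: kv[1], reverse=True):
    print(name, score)
```

Even a crude scale like this forces the team to talk about which failures actually matter, which is most of the value.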
This list isn’t comprehensive, since each business, application, or industry has its own focus. But a good starting point for identifying the highest-risk areas is to look for areas that:
- Are complex
- Were recently changed
- Use a new technology
- Had a high number of people involved in their development
- Were developed under time pressure
- Have a history of defects (are defect prone)
- Are frequently used
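One lightweight way to turn this checklist into a likelihood estimate is to count how many of the factors apply to each area. This is a hypothetical heuristic of my own, not a formal method, and the factor names are just labels for the bullets above:

```python
# Hypothetical heuristic: count how many checklist factors apply to an
# area and use the count as a rough proxy for likelihood of failure.

RISK_FACTORS = [
    "complex",
    "recently_changed",
    "new_technology",
    "many_developers",
    "rushed",
    "defect_prone",
    "frequently_used",
]

def likelihood(area_factors: set[str]) -> int:
    """Number of checklist factors that apply to this area (0-7)."""
    return sum(1 for factor in RISK_FACTORS if factor in area_factors)

# Example: a payments module that is complex, just changed, and defect prone
print(likelihood({"complex", "recently_changed", "defect_prone"}))  # → 3
```

The resulting count can feed straight into a risk score as the likelihood term, alongside a separate impact rating.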
Now that you’ve identified the areas where you want to focus the most testing resources, it’s time to cut down the number of test cases. As you look at each test case, ask what its failure would mean. Would a failure be catastrophic, or would it just hinder a user? Would it be damaging, or merely annoying?
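To make that cut concrete, one option is to rank test cases by the severity of what a failure would mean and keep only what fits the time budget. This is a sketch; the severity labels, test names, and budget are invented for illustration:

```python
# Sketch: rank test cases by the severity of a failure, then keep only
# as many as the testing window allows. Names and labels are hypothetical.

SEVERITY = {"catastrophic": 4, "damaging": 3, "hinders_user": 2, "annoying": 1}

test_cases = [
    ("payment_declined_flow", "catastrophic"),
    ("profile_photo_upload", "annoying"),
    ("password_reset", "damaging"),
    ("search_filter_sort", "hinders_user"),
]

budget = 2  # number of test cases we have time to run
prioritized = sorted(test_cases, key=lambda tc: SEVERITY[tc[1]], reverse=True)
to_run = prioritized[:budget]
print([name for name, _ in to_run])  # → ['payment_declined_flow', 'password_reset']
```

Anything below the cut line isn’t abandoned forever; it’s simply deferred until the highest-severity cases have been covered.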
As testers, we want our applications to be defect free, but that’s impossible. The best we can hope for is to find the biggest and worst defects before our users do. Using risk-based testing during time-crunched testing cycles is the best way to achieve that.