Risk-based Testing


First of all a confession

This has been one of those ‘Let’s Dump It On The Testers’ weeks. You know the kind of thing: someone else has stuffed up, the project is running late, they didn’t do what you advised them, but now somehow it’s all up to you to work the extra hours and make sure the deadline is not missed! That has resulted in me being away from home for a week, working long hours and coming and going from the hotel in the hours of darkness. So, this week’s blog is not only late, it’s also a republish of an article that appeared in TEST magazine some time ago.

 

Risk-based Testing

Goldilocks and the Two Bears! 

Let’s face the truth: we are testers, we test, that’s what we do. We love it. A good day for us is when everything development delivers to us is sent back with a host of Sev 1s. Oh yes, we pretend we are concerned about ensuring quality, protecting the organisation and adding value, but really we just love to find fault. The trouble is, those mean, tight-fisted bean counters won’t give us unlimited budget, the pin-striped suits in Corporate keep reminding us we have contractual dates to meet, and those frilly young things in marketing have gone and told the world that ‘WonderWidget’ will be demonstrated at ‘such and such geek show’.

You know the story of Goldilocks and the Three Bears? Well, guess what: in testing we never get to the third bowl; it’s never just right. Like it or not, we are going to have to accept that we only have limited budget and limited time to test this application.

Given the limitations above, a risk-based approach to testing can help ensure that we get the biggest bang for our buck. Risk-based testing is not new or difficult; it’s something we do all the time when writing tests, often without realising it. It’s a case of asking, “Where is the software most likely to fail, and what will be the consequence if it does?”

Time is a Requirement too. 

One of the big challenges for testers who adopt a risk-based approach is to accept that ‘Time to Market’ is a requirement too. In fact, it may be the most important requirement for a particular project. If we have to meet a deadline, it’s no good simply saying we still have 100 tests to run. We have to be able to say, “We have 100 tests left to run; however, the risk associated with those 100 tests is XXXXX, therefore if you release now the possible consequences are YYYYYY”. We have to be able to say, at any given stage, “These are the areas we identified as High, Medium and Low risk, these are the tests we have run, these are the results, and based on these our conclusion is blah blah blah.”

Risk Factor  

The key to running a successful risk-based test cycle is to correctly identify, assess and address the risks associated with the software. The process follows five stages:

1 Risk Identification

   Understand the risks by meeting with the relevant ‘experts’ from both the technical and the business communities and prepare a register of risks. Workshops, document reviews and past project data can all feed into this activity.

2 Risk Analysis

Not all risks are equal. To ensure that testing addresses the highest-exposure risks first, relevant experts and stakeholders need to meet to discuss and analyse the risks. Each risk is assigned a probability and an impact. This activity involves all parties because, for example, IT staff may not realise that a LOW technical risk might represent a HIGH business risk. For each risk identified, ask the question, “How likely is this to happen?”, and assign the appropriate score based on the agreed answer, i.e.

·         Almost certain to happen, highly likely = 5

·         This is probably going to happen, we think it’s likely = 4

·         Not sure this will happen, we think this is 50/50 = 3

·         This probably won’t happen, unlikely = 2

·         The chances of this happening are a million to one, this is very unlikely = 1

Again, for each risk ask the question, “If this happens, what will be the impact on the business?” and assign the risk the appropriate score, i.e.

·         It would be critical, the business objective could not be achieved, we would suffer immense loss = 5

·         It would be severe, the business objective would be undermined, we would suffer significant loss = 4

·         It would be moderate, the business objective would be affected, we would suffer some loss = 3

·         It would be low, the business objective could still be achieved, we would suffer minor loss = 2

·         It would be negligible, there would be no real impact on the business objective, we would suffer no loss = 1

Having given each risk a Probability and an Impact score out of five, the exposure that each risk represents can be clearly seen and agreed.

Probability + Impact = Exposure, e.g.:

·         System won’t cope with the number of users: Probability 3 + Impact 4 = Exposure 7

·         Converting the Name field from 15 characters to 25 may cause data loss: Probability 1 + Impact 5 = Exposure 6

·         Transmitting confidential data between 16 different systems may cause data loss: Probability 5 + Impact 5 = Exposure 10
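The scoring above can be sketched in a few lines of Python. The `Risk` structure is purely illustrative (the article prescribes no tooling); the descriptions and scores are taken from the example register.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: int  # 1 (very unlikely) to 5 (almost certain)
    impact: int       # 1 (negligible) to 5 (critical)

    @property
    def exposure(self) -> int:
        # Exposure = Probability + Impact
        return self.probability + self.impact

register = [
    Risk("System won't cope with number of users", 3, 4),
    Risk("Converting Name field from 15 characters to 25 may cause data loss", 1, 5),
    Risk("Transmitting confidential data between 16 systems may cause data loss", 5, 5),
]

# Address the highest-exposure risks first
for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"{risk.exposure:2d}  {risk.description}")
```

Sorting the register by exposure gives the order in which the risks should be tackled: 10, then 7, then 6 in this example.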

 

3 Risk Response

Agree the appropriate response to each risk. Not every risk requires a test to be written; another mitigating action may be more appropriate.

Formulate test criteria for each risk, with clear objectives and pass/fail conditions. To help with this, ask the following questions:

·         What would cause this risk to materialise? What data, action, circumstance or event would need to be in play?

·         In which area(s) of the system is this most likely to occur?

·         Who would be the most likely to experience the consequence of this risk, and when and why?

·         What types of testing, techniques, or tools would be best to expose this risk?

Document the requirements for each test, such as data, environments, timescales and business resources.
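Pulling the questions and requirements above together, each risk response can be captured as a simple record. The sketch below is one possible shape in Python; every field name and value is an illustrative assumption, not a prescribed format.

```python
# A sketch of one documented risk response. All field names and values
# are illustrative assumptions, not a prescribed format.
risk_response = {
    "risk": "Transmitting confidential data between 16 different systems "
            "may cause data loss",
    "response": "test",  # alternatives: accept the risk, or mitigate another way
    # What would cause this risk to materialise?
    "trigger": "large payloads sent while several interfaces run concurrently",
    # Where is it most likely to occur, and who would feel the consequence?
    "areas": ["interface layer", "message queues"],
    "affected": "downstream business users reconciling daily totals",
    # Which types of testing would best expose it?
    "test_types": ["integration testing", "load testing"],
    "pass_fail": "no records lost or corrupted across all monitored transfers",
    # Requirements for the test
    "needs": {
        "data": "anonymised production extract",
        "environment": "full end-to-end integration rig",
        "timescale": "two days of execution",
        "business_resource": "a business analyst to verify reconciliations",
    },
}

print(risk_response["response"], "-", risk_response["pass_fail"])
```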

4 Test Scope Definition

Agree the scope for the testing: which risks are in and which are out of scope, what the test schedule will be, who is responsible for what, and what the minimum success criteria are for each set of testing.

It is very helpful to have agreed both generic quality or test objectives for the system (e.g. demonstrate that contractual requirements have been met) and specific risk-based quality or test objectives (e.g. demonstrate secure data transfer between system A and system B).

Produce a Test Coverage matrix, based not on functional requirements, but on identified risks.

Agree what reports will be produced and what information should be included. Agree who these reports should go to and how often. The main purpose of risk-based test reporting is to show which of the identified risks have been tested and which have not yet been addressed. It should provide the reader with the answer to the following question:

“If I release now, what is the danger of any of the identified risks becoming an actual issue?”
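A status report that answers this question can be sketched as follows. The risk data, the High/Medium/Low exposure bands and the report layout are all illustrative assumptions:

```python
# Each entry: (description, exposure, tests run, tests planned, failures found).
# The figures are invented for illustration.
risks = [
    ("Confidential data transfer between systems", 10, 12, 12, 0),
    ("System load with peak number of users", 7, 3, 8, 1),
    ("Name field conversion from 15 to 25 characters", 6, 0, 4, 0),
]

def band(exposure: int) -> str:
    # Illustrative banding of the 2-10 exposure scale
    if exposure >= 8:
        return "HIGH"
    if exposure >= 5:
        return "MEDIUM"
    return "LOW"

print("Band    Exposure  Tested  Risk")
for desc, exposure, run, planned, failures in sorted(risks, key=lambda r: -r[1]):
    tested = f"{run}/{planned}" + ("!" if failures else "")
    print(f"{band(exposure):<7} {exposure:<9} {tested:<7} {desc}")
```

A reader can see at a glance that the HIGH-exposure risk is fully tested while a MEDIUM-exposure risk has not been exercised at all; that, rather than a raw bug count, is the information a release decision needs.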

5 Test Phase

Testing will now follow a traditional test cycle (preparation, execution, reporting); however, the focus is no longer on testing each individual function but on running the tests that exercise the identified areas of risk, concentrating first on the high-exposure risks.

Give it a go

I believe that risk-based testing can offer a significant return on investment for many organisations, directing the test effort to where the pain is most likely to be felt. I believe it can empower the business decision makers, giving them the information they really need. It tells them not how many tests have been run and how many bugs have been found, but what the likely consequences to the business are if the code is released in its present state.

I would encourage anyone who finds themselves with either too little budget or too little time to do all the testing they want to do to consider piloting a risk-based test strategy, to discover whether it can benefit their organisation.

Tony Simms is the Principal Consultant at Roque Consulting (www.roque.co.uk) and is available to run training on risk based testing or to facilitate risk identification and response workshops. He can be contacted via email at tony.simms@roque.co.uk