Testing - 1, 2, 3

Traditional software testing goes through progressive application release stages - developer testing, alpha testing, and customer release - with spikes in the bug rate each time as different users exercise different portions of the application. When the rate of new bug reports and the number of open bugs decline to an acceptable level, the application is deemed ready for the next release stage.

So how do you determine that the bug rate has declined enough? The ideal release standard would be zero known bugs, but to some degree it depends on your testing goals and your customers' willingness to accept early bugs. If your product helps customers solve a problem that they otherwise could not solve, they might accept occasional bugs. If there are competing products on the market already, bug tolerance might be very low.

Proactive testing strategies (see Software Design for Testability) are a better way to ensure that bugs are found and fixed early. Directed black-box and white-box tests written at every level of the code ensure that every line of code is executed at least once. After all, if the code hasn't been executed in your own testing, you can't say that it works.

There are several levels of proactive testing, described in detail in Levels of Software Testing. Briefly, they are:

  • Basic Black Box Testing - typical valid and erroneous inputs for each feature in the specification
  • Basic White Box Testing - typical valid and erroneous inputs for each publicly accessible function in every application layer
  • All-statements White Box Testing - examples that cause each line of code to be executed in every function
  • All-conditions White Box Testing - examples that cause each condition in each selection statement to be exercised both ways (true and false)

Each successive level includes all of the tests in the previous levels. My testing standard is all-conditions white box testing, because it has a well-defined completion point and verifies that the code works as written. This takes time - about as much time in testing as in development - and your management may not be willing to delay a release until you are confident that all of the bugs have been found and fixed.
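
To make that completion point concrete, here is a minimal sketch in C++ (the routine, rates, and limits are all hypothetical). A single test executes every statement in this routine, but all-conditions testing also requires each of the two conditions in the "if" to be seen both true and false - a well-defined finish line:

    #include <cassert>

    // Hypothetical routine: a surcharge applies only to rush orders
    // over the weight limit.
    int shippingCost(int weight, bool rush)
    {
        int cost = weight * 2;      // hypothetical base rate
        if (weight > 50 && rush)    // two conditions to exercise
            cost += 25;             // rush surcharge
        return cost;
    }

    int main()
    {
        // All-statements coverage: a single test that takes the branch
        // executes every line of this routine.
        assert(shippingCost(60, true) == 145);

        // All-conditions coverage also requires each condition to be
        // false somewhere. Making each one false while the other stays
        // true catches bugs such as "&&" mistyped as "||".
        assert(shippingCost(60, false) == 120);  // rush is false
        assert(shippingCost(10, true)  == 20);   // weight limit not exceeded
        return 0;
    }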

What To Do If Your Test Budget Is Restricted

If your test budget (time or money) doesn't allow all-conditions white box testing, what can you do? Simply work your way down the list above. Start with black box tests, of course. Try at least to ensure that you can do basic white-box testing, with standalone test programs that can invoke each function separately. Once you have standalone test programs written, you can add to them at any time.
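
As a sketch of what such a driver can look like (the routine under test, trimTrailing, is hypothetical; in practice it would be linked in from your application's own source):

    #include <cassert>
    #include <cstring>

    // Hypothetical routine under test: remove trailing spaces in place.
    // Normally this comes from the application, not the driver.
    void trimTrailing(char *s)
    {
        std::size_t len = std::strlen(s);
        while (len > 0 && s[len - 1] == ' ')
            s[--len] = '\0';
    }

    // Standalone driver: calls the routine directly, with no need to
    // start the full application and navigate to the right feature.
    int main()
    {
        char buf1[] = "hello   ";
        trimTrailing(buf1);
        assert(std::strcmp(buf1, "hello") == 0);  // typical input

        char buf2[] = "";
        trimTrailing(buf2);
        assert(std::strcmp(buf2, "") == 0);       // boundary: empty string

        return 0;  // exit code 0 tells build scripts all tests passed
    }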

In particular, you can add a white box test for every bug that comes in. This is pretty straightforward if you can invoke any given function; you just need to copy the context of a bug (data structures and parameters) from the debugger session into your test program, then run the test program to ensure the bug is triggered again. Now you can analyze the bug without single-stepping through the entire application. Keeping the test in the standalone test driver ensures that the bug stays fixed.
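
Extending the hypothetical driver sketched above, such a regression test might look like this (the bug number and the captured context are made up for illustration):

    // Regression test for bug #1234: the routine misbehaved on an
    // all-spaces string. The input was copied verbatim from the
    // debugger session in which the failure was observed.
    void testBug1234()
    {
        char buf[] = "   ";                 // context captured in the debugger
        trimTrailing(buf);
        assert(std::strcmp(buf, "") == 0);  // failed before the fix
    }

Call testBug1234() from the driver's main() and the case is re-run on every build, which is what keeps the bug fixed.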

You're going to spend significant amounts of time testing your code. Limiting your up-front test budget simply pushes some of the testing past the release date. Bug analysis is just another form of testing. You can either delay the release date to perform proper testing, or you can delay your customers while they wait for fixes to the released product. It's your call.

If you can't spend the time to do all-conditions white box testing before releasing the software, I recommend that you set a standard for fixing bugs: every routine that has a bug gets all-conditions white box testing before the bug is closed. If a routine has one bug, it's a safe bet that it has more. You want to test it thoroughly to ensure you don't have to come back to it again. Doing this means you approach the goal of all-conditions white box testing over time, as bugs come in.

When Will You Have "Enough" Tests?

I try to test my code using all-conditions white box testing. Although initial release dates are delayed a bit, I believe that it saves me time in the long run. The few times I've strayed from this test strategy in my own work, I've regretted it. Optimization software demands fully functioning low-level code; without it, you don't know whether an optimization strategy has failed due to a bug in low-level code or whether the strategy itself needs improving. Worse yet, bugs have to be fixed in a precise order or they may not be reproducible.

The nice thing about all-conditions white box testing is that you know when you are done: every line of code is executed, and every unique branch condition is triggered. Once this has been completed, additional white box tests are pointless. You should already have some black box tests to verify that the code's specification is met, but if not you can add a few now. Then it is time to move on, with confidence in what you've done.

If your budget doesn't allow for this, add tests in the following order:

  1. Define some typical examples - cases that customers are likely to use. These are your "smoke tests." They can also form the basis of a black box test suite.
  2. Add some stress tests: inputs that are at the boundaries of validity (both in bounds and out of bounds), large data sets, and very small data sets. (Steps 1 and 2 are sketched in the example after this list.)
  3. Put in examples of rare occurrences that your code must deal with. You know that your code must deal with them because there is code for them. These of course are the first white box tests.
  4. Define standalone test driver programs.
  5. Add tests to get full statement coverage (mid-level white box testing).
  6. Add tests to get all-conditions coverage.
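
Steps 1 and 2 might look like this in a standalone driver (the routine, clampPercent, and its limits are hypothetical):

    #include <cassert>

    // Hypothetical routine under test: clamp a percentage to [0, 100].
    int clampPercent(int value)
    {
        if (value < 0)   return 0;
        if (value > 100) return 100;
        return value;
    }

    int main()
    {
        // Step 1: smoke test with a typical, valid input.
        assert(clampPercent(42) == 42);

        // Step 2: stress tests at the boundaries of validity,
        // both in bounds and out of bounds.
        assert(clampPercent(0)   == 0);    // lower bound, in bounds
        assert(clampPercent(100) == 100);  // upper bound, in bounds
        assert(clampPercent(-1)  == 0);    // just out of bounds
        assert(clampPercent(101) == 100);  // just out of bounds
        return 0;
    }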

Rare occurrences may seem like an unusual place to focus on testing; after all, these situations are rare and customers might not ever see them. But the code required to deal with rare occurrences tends to be complex, so it is more likely to have bugs. Your goal here is to find and fix as many bugs as possible.
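
A white box test can trigger a rare occurrence deliberately instead of waiting for it to happen in the field. A sketch (the routine and its fallback behavior are hypothetical):

    #include <cassert>
    #include <cstdio>

    // Hypothetical routine: open a configuration file, returning -1 so
    // the caller can fall back to defaults when the file is missing -
    // rare in the field, but the handling code must still be tested.
    int loadConfig(const char *path)
    {
        std::FILE *f = std::fopen(path, "r");
        if (f == nullptr)
            return -1;   // the rare path
        std::fclose(f);
        return 0;
    }

    int main()
    {
        // Force the rare path by passing a path known not to exist.
        assert(loadConfig("no/such/file.cfg") == -1);
        return 0;
    }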

Focus a little more attention on lower-level code. Not only is lower-level code easier to test with standalone drivers, but you'll save huge amounts of time isolating deeply buried bugs (see The Cost of Debugging Software). Trustworthy lower-level code is well worth the effort.

Add a test in your standalone test driver for each bug, even if the routine with the bug does not yet have directed tests in the driver. You can always add more tests.

When you do find a bug in a routine, look for variants of that same routine. Lots of code is copied and adapted in ways that make it hard to generalize into a single routine. You either have a common routine that is very complex (and thus prone to bugs) or several related routines that each do one thing well (but may share a copied bug).

Conclusions

Sometimes you just don't have the time or money to do full testing before your application is released. A careful test strategy can get you reasonable product quality early on, with the ability to raise testing (and product) quality to the highest levels as time goes by.
