Verification and Validation are an integral part of any software development process. Numerous definitions of these two processes exist, the best known being that validation checks whether you built the right product, while verification checks whether you built the product right. Both processes are critical to the successful execution of a project under any SDLC. It would not be an overstatement to say that the success of any product is directly linked to the role and quality of these two processes in the system.
Test-cases are an integral part of the Verification and Validation processes and are the basic construct of the entire validation system. One typically hears that thousands of test-cases have been executed, and sometimes the numbers are simply mind-boggling. Yet, in the same breath, it is often reported that the customer has raised a lot of issues/bugs and is generally unhappy. To understand the potential reasons for this dissatisfaction, one needs to take a closer look at the process and determine whether something is fundamentally missing or flawed.
1. Quantity vs Quality
Running thousands of test-cases is a necessary but not a sufficient condition to guarantee the quality of any software product. Though this appears counter-intuitive, since a larger number of test-cases should seemingly make the product better, one has to note that the number of test-cases is only one dimension of the multi-dimensional vector that embodies the test construct.
Generally, a test-case should be designed for a specific goal. Some organizations insist on documenting the expected result of each test-case, which helps the designer capture potential gaps in the test. When 10 test-cases exercise the same piece of code, i.e. the exact same lines, the design is fundamentally flawed or skewed, as the remaining lines of code are never tested. However, if test-cases are designed with expected results in mind, one can instead design 5 test-cases in which the majority of the code is traversed. Code coverage is the metric widely employed to capture this aspect: it is a cumulative percentage that compares the total lines of code touched during testing against the overall lines of code available.
Higher code coverage (as a percentage) is the order of the day. Typically, for a product to be field-grade, code coverage has to be in the very high nineties, say greater than 98%. This ensures that a large part of the code is tested and verified before customer deployment, and the probability of customer-reported issues is correspondingly lower. In a nutshell, 100 test-cases with a combined code coverage of 80% are not as good as 6 test-cases with a combined code coverage of 98%.
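A minimal sketch of how the cumulative figure above is computed: each test contributes the set of lines it executed, and combined coverage is the union of those sets over the total lines of code. The line counts below are illustrative placeholders, not measurements from a real product.

```python
def combined_coverage(executed_per_test, total_lines):
    """Return the cumulative coverage percentage across all tests."""
    covered = set()
    for lines in executed_per_test:
        covered |= lines  # union: a line counts once, however often it is hit
    return 100.0 * len(covered) / total_lines

# Ten tests that all exercise the same 80 lines of a 100-line module...
redundant = [set(range(1, 81))] * 10
# ...versus six tests that together touch 98 distinct lines.
targeted = [set(range(1, 21)), set(range(21, 41)), set(range(41, 61)),
            set(range(61, 81)), set(range(81, 91)), set(range(91, 99))]

print(combined_coverage(redundant, 100))  # 80.0
print(combined_coverage(targeted, 100))   # 98.0
```

In practice a tool such as coverage.py produces these line sets automatically; the point of the sketch is that redundant tests add nothing to the union.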
2. Is Code Coverage sufficient?
Code coverage, again, is a good tool, but it doesn't guarantee that the software product actually meets the necessary requirements and caters to the different conditions it will be subjected to. Knowledge of the domain or the product being built is also essential in developing a good test-bed.
If the software product under consideration is a multimedia encoder, code coverage will ensure that most of the algorithm paths are exercised, but it will not necessarily ensure that the built product adheres to the design considerations. An encoder is expected to handle variations across multiple dimensions of the input and generate good-quality bitstreams. To achieve this goal, a good test-bed that exercises all the dimensions of the input vector should be designed.
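Such a multi-dimensional input vector can be sketched as a cross product of the dimensions the encoder must handle. The dimension values below (resolutions, frame rates, bit depths, content types) are hypothetical placeholders, not a real encoder specification.

```python
from itertools import product

# Each list is one dimension of the input vector; values are illustrative.
RESOLUTIONS = [(176, 144), (1280, 720), (1920, 1080)]
FRAME_RATES = [15, 30, 60]
BIT_DEPTHS = [8, 10]
CONTENT = ["natural", "screen", "high-motion"]

# The cross product enumerates one test-case per combination of dimensions.
test_matrix = [
    {"resolution": r, "fps": f, "bit_depth": b, "content": c}
    for r, f, b, c in product(RESOLUTIONS, FRAME_RATES, BIT_DEPTHS, CONTENT)
]

# 3 * 3 * 2 * 3 = 54 cases covering every combination of the input dimensions.
print(len(test_matrix))  # 54
```

A full cross product grows quickly, so in practice teams often prune it with pairwise combination techniques, but the principle of varying every dimension remains.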
3. Test Suites
In any typical embedded system, different suites of tests are required to validate different aspects of the system. Hence, apart from the overall functional suite, one would employ performance, negative and stress testing suites. One common trap is to combine two or more suites together to optimize time and effort.
If a combined test-case exercises both the functional and the performance aspects of the system, its failure cannot be easily localized to either aspect. This requires additional debugging effort to understand the nature of the failure and draw conclusions from it. Hence, combining suites is not beneficial from an overall test strategy perspective.
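A sketch of keeping the two aspects in separate test-cases, so that a failure localizes immediately to one of them. The `encode()` stub and its 50 ms latency budget are hypothetical stand-ins, not part of any real product.

```python
import time
import unittest

def encode(frame: bytes) -> bytes:
    # Hypothetical unit under test: a trivial, self-inverting "encoder".
    return bytes(reversed(frame))

class FunctionalTests(unittest.TestCase):
    def test_encode_round_trips(self):
        # Functional assertion only: the output content is correct.
        self.assertEqual(encode(encode(b"frame-data")), b"frame-data")

class PerformanceTests(unittest.TestCase):
    def test_encode_meets_latency_budget(self):
        # Performance assertion only: stays within the (assumed) 50 ms budget.
        start = time.perf_counter()
        encode(b"\x00" * 100_000)
        self.assertLess(time.perf_counter() - start, 0.050)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

When `PerformanceTests` fails, no functional debugging is needed, and vice versa; that separation is exactly what a combined test-case forfeits.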
In the same vein, a good test design should encompass system- and integration-level testing, as these are traditionally known to be the pain points. When modules from different developers are integrated, hidden assumptions come to the fore and all hell breaks loose. Hence, it is prudent to forecast the potential pitfalls and design specific tests for these integrations, particularly at the interfaces. If a suite of integration tests passes, a huge risk is mitigated from the project perspective.
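An interface-level integration test can be sketched as below. The producer and consumer functions are hypothetical; the test pins down the shared assumption (the byte order of a length header) that integration bugs between developers typically hide in.

```python
import struct

def pack_message(payload: bytes) -> bytes:
    """Producer side: 4-byte big-endian length header + payload."""
    return struct.pack(">I", len(payload)) + payload

def unpack_message(data: bytes) -> bytes:
    """Consumer side: must honour the same header convention."""
    (length,) = struct.unpack(">I", data[:4])
    return data[4 : 4 + length]

# Round-trip test across the interface: if either side had silently assumed
# little-endian, this fails at integration time rather than in the field.
payload = b"sensor-reading-42"
assert unpack_message(pack_message(payload)) == payload
print("interface contract holds")
```

The value of such a test is that it encodes the interface agreement itself, so either module can change internally as long as the contract still holds.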
4. Customer Tests
Customer is the king!! Once the software product has been released to a customer, it is always advisable to design a test suite that exercises the system in the exact manner the customer does. This is an essential and critical aspect of the overall validation strategy, and one that often gets missed during tight schedules. Unless the customer's test setup is replicated, there will always be assumptions and potential dangers in the future. Forecasting this early enough and designing such a suite should be the mantra of every developer.
Summarizing the different aspects described above, one can arrive at the following thumb-rules for effective test-case design:
- Code coverage is an important aspect that should be factored into test-case design. Try to achieve close to 100% code coverage.
- Domain-specific knowledge is essential to augment code coverage; it not only validates the product but also ensures that it works in accordance with the expected design.
- Horses for courses should be the mantra in design. Different test-cases should be designed for different scenarios, and it is best to avoid combining more than one objective into a single test-case.
- The customer's test-cases should be included in the overall test scenario. This reduces a lot of effort and pressure from a long-term perspective.