Friday, April 16, 2010

Time to get POSITIVE about NEGATIVE

I read an interesting piece of information from Mr. Paul Henderson, VP of Product Marketing for Device Test. The summary is:

Only marginal test automation is in place, even in advanced industries like semiconductors and embedded electronics. And companies are still doing quite little in the way of “negative” testing. Negative testing means testing that tries to break the system, validate fault and exception handlers, or otherwise force the device into an unusual state or “edge condition”.
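To make that concrete, here is a minimal sketch of a negative test in C, assuming a hypothetical firmware function set_fan_speed() that is specified to accept 0–100 and to reject anything else through an error return rather than faulting:

```c
/* Minimal sketch of positive vs. negative testing for a hypothetical
 * firmware call. The negative cases deliberately feed out-of-range
 * inputs and check that the error path fires instead of the device
 * crashing or silently accepting bad data. */
#include <assert.h>
#include <limits.h>
#include <stdio.h>

#define SPEED_MAX 100
#define ERR_RANGE -1

/* Hypothetical function under test: accepts 0..SPEED_MAX percent. */
static int set_fan_speed(int percent)
{
    if (percent < 0 || percent > SPEED_MAX)
        return ERR_RANGE;          /* the exception path we want to exercise */
    /* ... would program the fan controller here ... */
    return 0;
}

int main(void)
{
    /* Positive test: a value the specification explicitly allows. */
    assert(set_fan_speed(50) == 0);

    /* Negative tests: values the device should survive and reject. */
    assert(set_fan_speed(-1)      == ERR_RANGE);
    assert(set_fan_speed(101)     == ERR_RANGE);
    assert(set_fan_speed(INT_MAX) == ERR_RANGE);

    puts("all negative-path checks passed");
    return 0;
}
```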

Usually companies base their quality assurance strategy on ‘positive’ testing of functional requirements. The test group takes the user specification and exercises the device from a black-box perspective to see if all the features do what they were specified to do. Leading teams actually map tests to requirements to show traceability from requirements to tested features and results.

This is good and important, but the defects customers see in the field still relate to unusual conditions that the device was never tested to handle.

For example: unusual combinations of data inputs, overflows from long-running operation, performance bottlenecks triggered by an unusual combination of events, or unrecoverable system states caused by hardware failures or other untested fault conditions.
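The overflow case is a classic. As a sketch, consider a hypothetical 32-bit millisecond tick counter: it wraps after roughly 49.7 days of continuous operation, and a naive timeout comparison misfires right at the wrap, which is exactly the kind of condition a negative test should force:

```c
/* Sketch of an "overflow from long-running operation" defect using a
 * hypothetical 32-bit millisecond tick. A negative test drives the tick
 * counter close to its wrap point instead of waiting 49.7 days. */
#include <stdint.h>
#include <stdio.h>

/* Buggy: start + limit_ms does not wrap, but 'now' already has, so the
 * comparison reports "no timeout" even though the deadline has passed. */
static int timeout_naive(uint32_t now, uint32_t start, uint32_t limit_ms)
{
    return now > start + limit_ms;
}

/* Robust: unsigned subtraction is defined modulo 2^32, so the elapsed
 * time is correct across a single wraparound of the tick counter. */
static int timeout_robust(uint32_t now, uint32_t start, uint32_t limit_ms)
{
    return (uint32_t)(now - start) > limit_ms;
}

int main(void)
{
    uint32_t start = UINT32_MAX - 100; /* test forces the tick near wrap */
    uint32_t now   = 10;               /* 111 ms later, after wrapping   */

    /* 111 ms have elapsed against a 50 ms limit, so both should print 1. */
    printf("naive : %d (bug: reports 0 at the wrap)\n",
           timeout_naive(now, start, 50));
    printf("robust: %d\n", timeout_robust(now, start, 50));
    return 0;
}
```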

The subsequent rationalization is usually: “The failure occurred because customers were using the product in a way it was not designed for.” Whatever the reason, the fact is that these kinds of negative tests do not flow from traditional design requirements and therefore are not part of the resulting test suites.

Testing a product against its design requirements is of course the starting point for any QA process. But development and QA teams need to start thinking a lot more about how to anticipate and test for failure modes and other conditions that devices may encounter in the real world. Increasingly, devices are no longer single-function; they are part of an ecosystem of interoperable devices, so the potential modes of operation, and therefore the modes of failure, are far more numerous.

This is somewhat of a mindset change, and it will also require increasing levels of automation to implement. There is also an opportunity for more of a ‘design-for-testability’ approach that starts up front in the development process. Product requirements should specify anticipated failure modes and should contain requirements to the effect of ‘the device shall have a fault handler covering the following conditions, and these shall be tested as specified…’.

Software developers then need to work more closely with test developers to ensure that testers understand potential failure modes and that they provide appropriate ‘hooks’ to allow automated tests to drive the software or device into failure conditions during testing.
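As a sketch of what such a hook might look like, assume a hypothetical flash driver and a test-only flag that the harness can set to inject a write failure and drive the recovery path deterministically:

```c
/* Sketch of a fault-injection 'hook' for a hypothetical flash driver.
 * In a production build the flag and its setter would be compiled out
 * (e.g. behind a test-build macro); here they are left in so the
 * example stays self-contained and runnable. */
#include <stdio.h>

static int g_force_flash_error = 0;       /* set by the test harness */
void test_force_flash_error(int on) { g_force_flash_error = on; }

/* Hypothetical driver call: returns 0 on success, -1 on failure. */
int flash_write(unsigned addr, const void *data, unsigned len)
{
    if (g_force_flash_error)
        return -1;                        /* injected hardware fault */
    (void)addr; (void)data; (void)len;    /* real write would go here */
    return 0;
}

/* Application code whose recovery path the negative test exercises. */
int save_config(const void *cfg, unsigned len)
{
    if (flash_write(0x1000, cfg, len) != 0) {
        fprintf(stderr, "flash write failed, keeping defaults\n");
        return -1;                        /* fall back to defaults */
    }
    return 0;
}

int main(void)
{
    char cfg[4] = {0};
    test_force_flash_error(1);            /* drive the failure path */
    return save_config(cfg, sizeof cfg) == -1 ? 0 : 1;
}
```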

Separately, quality assurance professionals need to take their own, harder look at real-world conditions and potential failure modes (including ones that were not necessarily anticipated up front by designers). QA teams can provide an independent view of device operation and deployment that can project where additional failures could occur from operator error, component failure, or complex deployment configurations.

Many of these failure modes have been time-consuming and expensive to test using traditional manual methods, but with new software-based test automation tools, this comprehensive negative testing can now become practical. New test automation platforms can dramatically expand the range and depth of negative testing of device software while still meeting product delivery schedules, and higher delivered quality will result.


Bottom line: more negative testing means a higher probability of more positive outcomes for any maker of complex embedded devices. It’s time to get started.
