Robustness Tests Checklist Example

This checklist of techniques is intended for software robustness testing, which aims at:

  • Making the software fail to do what it should do;

  • Making it do things it should not do;

  • Demonstrating how it performs under adverse conditions.

Interface Testing

  • All interface variables are at their extreme values.

  • All interface variables are individually at their extreme values with other interface variables at normal values (see the sketch after this list).

  • All values of the domain of each interface variable with other interface variables at normal values.

  • All values of all variables in combination (this will only be feasible for small interfaces).

  • Check that the boundaries in the input domain of the specification coincide with those in the program.

  • Stress the specified timings/synchronizations.
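
As an illustration of the variable-combination items above, here is a minimal sketch in Python (the interface variables, names, and ranges are hypothetical) that generates both the single-extreme vectors and the full extreme-value combinations:

```python
from itertools import product

# Hypothetical interface variables: (minimum, nominal, maximum).
VARIABLES = {
    "voltage": (0.0, 12.0, 28.0),
    "temperature": (-40.0, 20.0, 85.0),
    "mode": (0, 1, 3),
}

def one_at_extreme():
    """Each variable individually at min/max, all others at nominal."""
    nominal = {name: vals[1] for name, vals in VARIABLES.items()}
    for name, (lo, _, hi) in VARIABLES.items():
        for extreme in (lo, hi):
            vector = dict(nominal)
            vector[name] = extreme
            yield vector

def all_extremes_combined():
    """Every min/max combination (feasible only for small interfaces)."""
    names = list(VARIABLES)
    extremes = [(VARIABLES[n][0], VARIABLES[n][2]) for n in names]
    for combo in product(*extremes):
        yield dict(zip(names, combo))

if __name__ == "__main__":
    for vector in one_at_extreme():
        print("single-extreme:", vector)
    for vector in all_extremes_combined():
        print("all-extremes:  ", vector)
```

Each generated vector is then fed to the interface under test; the same generators can be reused to check that the domain boundaries in the specification coincide with those in the program.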

Extreme conditions

This type of test does not necessarily need to pass; its purpose is to find and highlight the limits of the software and the system.

  • System initialization should be exercised during abnormal conditions.

  • If the test object works in polling mode, subject it to many more input changes per unit time than under normal conditions.

  • If the test object works on demand, increase the number of demands per unit time beyond normal conditions (see the sketch after this list).

  • If the size of a database plays an important role, increase it beyond normal conditions. Tune influential devices to their maximum or minimum speed, as applicable.

  • For the extreme cases, set all influential factors, as far as possible, to their boundary conditions at the same time.

  • Worst-case load analysis: attempt to simulate worst-case scenarios by setting maximum response time, maximum blocking time, maximum execution time, maximum memory use, etc. These parameters should represent extreme conditions, but it is important that, taken together, they correspond to realistic scenarios defined within the specification (e.g. budget reports).

  • Attempt to provoke synchronization errors: race conditions, missed deadlines. What happens if several FDIR mechanisms trip at the same time, and how are they managed (FIFO, priority, ...)?

  • Artificially overload the system by reducing resources, to check e.g. whether higher-priority functions are carried out in preference to lower-priority ones (e.g. reduce heap/stack size, reduce clock speed, steal cycles, increase disabled time in interrupt handlers, increase network latency, ...).

  • Determine the possible failure modes of the incoming data, especially for complex digital data strings from an external system.

  • Check that protection mechanisms for exceeded frame times respond correctly.

  • Exercise other “interesting” combinations of inputs and events that are suspected to lead to software failure.
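
The demand-rate and overload items above could be driven by a harness along these lines; a minimal sketch, assuming a hypothetical process_demand() entry point and a 10 ms per-demand deadline (both invented for illustration):

```python
import time

DEADLINE_S = 0.010  # Hypothetical per-demand deadline (10 ms).

def process_demand(payload):
    """Stand-in for the test object's entry point; replace with the real one."""
    time.sleep(0.002)  # Simulated processing cost of one demand.

def stress(rates_hz=(50, 100, 200, 400, 800), demands_per_rate=200):
    """Ramp the demand rate beyond normal conditions and count deadline misses."""
    for rate in rates_hz:
        period = 1.0 / rate
        t0 = time.monotonic()
        missed = 0
        for i in range(demands_per_rate):
            scheduled = t0 + i * period  # Nominal release time of demand i.
            now = time.monotonic()
            if now < scheduled:
                time.sleep(scheduled - now)
            process_demand(i)
            # A miss: the demand completed later than its release time plus deadline.
            if time.monotonic() > scheduled + DEADLINE_S:
                missed += 1
        print(f"{rate:4d} demands/s: {missed}/{demands_per_rate} deadline misses")

if __name__ == "__main__":
    stress()
```

Once the demand rate exceeds what the test object can sustain, a backlog builds up and the miss count rises; the point at which this happens marks the system's load limit.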

Error injection

  • Inject events too early or too late to determine robustness.

  • Inject input values outside the specified boundaries (requirements stretching).

  • Challenge FDIR to the extreme (e.g. fault all the sensors at once; try all combinations of sensor faults, etc.).

  • Tests where the software system does not receive adequate amounts of input data or does not receive any at all.

  • Tests where the software system is fed with nonsense or corrupted data (see the sketch after this list).

  • Tests for repeated commands, out-of-order commands, and missing command fields.

  • Impose operator failure and deliberate sabotage scenarios to determine robustness (also known as “disaster testing”).

  • Tests where the software is forced into a state that should never occur in normal operation.

  • Provoke state transitions that are not allowed by the software requirements.

  • Tests where the internal state vector is deliberately corrupted – to verify the ability to perform failure detection, isolation, and recovery (FDIR).

  • Back-to-back Monte Carlo testing using executable models (e.g. MATLAB). Potentially only applicable to software sub-components.

  • A test to determine what happens when the reference time (from an external source) drifts randomly.
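
A minimal sketch of such an injection harness in Python; the frame format and the handle_command() parser are invented stand-ins for the software under test:

```python
import os
import random
import struct

def handle_command(frame: bytes) -> str:
    """Invented stand-in for the software under test: a tiny command parser
    that must cleanly reject malformed frames instead of crashing."""
    if len(frame) < 4:
        raise ValueError("frame too short")
    opcode, length = struct.unpack(">HH", frame[:4])
    if length != len(frame) - 4:
        raise ValueError("length field mismatch")
    if opcode > 0x0010:
        raise ValueError("unknown opcode")
    return f"accepted opcode {opcode:#06x}"

def mutate(frame: bytes) -> bytes:
    """Apply one random corruption: bit flip, truncation, duplication, or garbage."""
    choice = random.randrange(4)
    if choice == 0:                                # flip one bit
        i = random.randrange(len(frame))
        flipped = frame[i] ^ (1 << random.randrange(8))
        return frame[:i] + bytes([flipped]) + frame[i + 1:]
    if choice == 1:                                # truncate (missing fields)
        return frame[:random.randrange(len(frame))]
    if choice == 2:                                # duplicate (repeated command)
        return frame + frame
    return os.urandom(random.randrange(1, 32))     # pure nonsense

if __name__ == "__main__":
    good = struct.pack(">HH", 0x0003, 2) + b"\x01\x02"
    for _ in range(1000):
        try:
            handle_command(mutate(good))
        except ValueError:
            pass  # A clean, diagnosed rejection is the expected robust behaviour.
        # Any other exception escaping here is a robustness finding.
```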

Provoking run-time errors

  • Array index out of bounds.

  • Use a null pointer.

  • Divide by zero.

  • Logarithm of zero.

  • Tangent of ±π/2.

  • Arithmetic over-/underflow, especially for time-related functions, such as filters, integrators, and delays.

  • Misuse of library functions. Ensure that inputs and parameters to functions are properly validated.

  • For loops where the loop count is a computed value, develop test cases that attempt to compute out-of-range loop-count values and thus demonstrate the robustness of the loop-related code. (A harness sketch that provokes several of the errors above follows this list.)
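
A minimal sketch of such a harness in Python, where these faults surface as exceptions (in unmanaged languages such as C they are undefined behaviour and need e.g. sanitizers or hardware traps instead); the expect() helper is invented for illustration:

```python
import math

def expect(exc_type, description, fn):
    """Invented helper: run fn and check that the expected error is trapped."""
    try:
        fn()
    except exc_type:
        print(f"OK       {description}: trapped {exc_type.__name__}")
    except Exception as exc:
        print(f"FINDING  {description}: unexpected {type(exc).__name__}: {exc}")
    else:
        print(f"FINDING  {description}: no error raised")

if __name__ == "__main__":
    data = [1, 2, 3]
    expect(IndexError,        "array index out of bounds", lambda: data[10])
    expect(AttributeError,    "null (None) dereference",   lambda: None.value)
    expect(ZeroDivisionError, "divide by zero",            lambda: 1 / 0)
    expect(ValueError,        "logarithm of zero",         lambda: math.log(0.0))
    expect(OverflowError,     "arithmetic overflow",       lambda: math.exp(1e6))
    # Note: math.tan(math.pi / 2) does NOT raise in IEEE-754 arithmetic; it
    # returns a huge finite value, so a magnitude check is needed instead.
```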
