Notes - MCS
Robust Software

Robustness Tests Checklist Example

This list of techniques is intended for software robustness testing:

  • Making the software fail to do what it should do;

  • Making it do things it should not do;

  • Demonstrating how it performs under adverse conditions.

Interface Testing

  • All interface variables are at their extreme values (a boundary-value sketch follows this list).

  • All interface variables are individually at their extreme values with other interface variables at normal values.

  • All values of the domain of each interface variable with other interface variables at normal values.

  • All values of all variables in combination (this will only be feasible for small interfaces).

  • Check that the boundaries in the input domain of the specification coincide with those in the program.

  • Stress the specified timings/synchronizations.
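
As a concrete illustration of the one-variable-at-a-time and all-combinations strategies above, here is a minimal boundary-value sketch in Python. The interface `set_motor_speed`, its parameter ranges, and the nominal values are hypothetical, chosen only to make the checklist items executable.

```python
import itertools

# Hypothetical interface under test; names and ranges are illustrative.
SPEED_RANGE = (-1000, 1000)    # documented domain of `speed`
TORQUE_RANGE = (0, 50)         # documented domain of `torque_limit`
NOMINAL = {"speed": 100, "torque_limit": 10}

def set_motor_speed(speed, torque_limit):
    """Stand-in for the real interface; must reject out-of-domain input."""
    if not SPEED_RANGE[0] <= speed <= SPEED_RANGE[1]:
        raise ValueError("speed out of range")
    if not TORQUE_RANGE[0] <= torque_limit <= TORQUE_RANGE[1]:
        raise ValueError("torque_limit out of range")
    return "ok"

def extremes(lo, hi):
    # Boundary values plus the first out-of-domain neighbours.
    return [lo - 1, lo, hi, hi + 1]

def check(speed, torque):
    # Rejection of bad input is required; a crash or silent acceptance
    # of out-of-domain values is a robustness failure.
    in_domain = (SPEED_RANGE[0] <= speed <= SPEED_RANGE[1]
                 and TORQUE_RANGE[0] <= torque <= TORQUE_RANGE[1])
    try:
        set_motor_speed(speed, torque)
        assert in_domain, f"accepted out-of-domain input ({speed}, {torque})"
    except ValueError:
        assert not in_domain, f"rejected in-domain input ({speed}, {torque})"

# Each variable individually at its extremes, the other at a normal value.
for speed in extremes(*SPEED_RANGE):
    check(speed, NOMINAL["torque_limit"])
for torque in extremes(*TORQUE_RANGE):
    check(NOMINAL["speed"], torque)

# All extreme values in combination (feasible only for small interfaces).
for speed, torque in itertools.product(extremes(*SPEED_RANGE),
                                       extremes(*TORQUE_RANGE)):
    check(speed, torque)

print("interface boundary tests passed")
```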

Extreme Conditions

This type of test doesn’t necessarily need to pass; its purpose is to find and highlight the limits of the software and the system.

  • System initialization should be exercised during abnormal conditions.

  • If working in a polling mode, the test object receives many more input changes per unit of time than under normal conditions.

  • If working on demand, the number of demands per unit of time to the test object is increased beyond normal conditions.

  • If the size of a database plays an important role, it is increased beyond normal conditions.

  • Influential devices are tuned to their maximum or minimum speed.

  • For the extreme cases, all influential factors are, as far as possible, set to their boundary conditions at the same time.

  • Worst-case load analysis: attempt to simulate worst-case scenarios by setting the maximum response time, maximum blocking time, maximum execution time, maximum memory use, etc. These parameters should represent extreme conditions, but it is important that, taken together, they correspond to realistic scenarios defined within the specification (e.g. budget reports).

  • Attempt to provoke synchronization errors: race conditions, missed deadlines. What happens if multiple FDIR trips occur at the same time, and how are they managed (FIFO, priority...)?

  • Artificially overload the system by reducing resources to check, for example, whether higher-priority functions are carried out in preference to lower-priority ones (e.g. reduce heap/stack size, reduce clock speed, steal cycles, increase disabled time in interrupt handlers, increase network latency, …). See the overload sketch after this list.

  • The possible failure modes of the incoming data should be determined, especially for complex digital data strings from an external system.

  • A check should be made to ensure that protection mechanisms for exceeded frame times respond correctly.

  • Other “interesting” combinations of inputs and events that are suspected to lead to software failure.
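
The overload item above can be made concrete with a small sketch. The following Python toy is entirely hypothetical (the scheduler, the time budget, and the demand mix are illustrative, not from the source): it floods a priority queue with far more demands than the time budget can absorb and checks that the high-priority function is still carried out in preference to the low-priority ones.

```python
import heapq
import time

# Toy priority scheduler; all names and numbers are illustrative.
def run_scheduler(demands, budget_s):
    """Serve (priority, name) demands until the time budget runs out;
    a lower priority number means more important work."""
    heap = list(demands)
    heapq.heapify(heap)
    served = []
    deadline = time.monotonic() + budget_s
    while heap and time.monotonic() < deadline:
        _prio, name = heapq.heappop(heap)
        time.sleep(0.001)              # stand-in for real work
        served.append(name)
    return served

# Overload: far more demands than the budget can possibly absorb.
demands = [(1, "safety_check")] + [(9, f"logging_{i}") for i in range(500)]
served = run_scheduler(demands, budget_s=0.05)

# Robustness criterion: under overload, the high-priority function must
# still be carried out in preference to the low-priority ones.
assert "safety_check" in served, "high-priority demand starved under overload"
print(f"served {len(served)} of {len(demands)} demands under overload")
```

A real campaign would shrink actual resources (stack, clock, bandwidth) rather than a time budget, but the pass criterion is the same: degraded throughput is acceptable, starved high-priority functions are not.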

Error Injection

  • Inject too-early/too-late events to determine robustness.

  • Inject input values outside the specified boundaries (requirements stretching).

  • Challenge FDIR to the extreme (e.g. fault all the sensors at once; try all combinations of sensor faults, etc.).

  • Tests where the software system does not receive adequate amounts of input data or does not receive any at all.

  • Tests where the software system is fed with nonsense or corrupted data (see the decoder sketch after this list).

  • Tests for repeated commands, out-of-order commands, and missing command fields.

  • Impose operator failure and deliberate sabotage scenarios to determine robustness (also known as “disaster testing”).

  • Tests where the software is forced into a state that should never occur in normal operation.

  • Provoke state transitions that are not allowed by the software requirements.

  • Tests where the internal state vector is deliberately corrupted – to verify the ability to perform failure detection, isolation, and recovery (FDIR).

  • Back-to-back Monte Carlo testing using executable models (e.g. MATLAB), potentially only applicable to software sub-components.

  • Test what happens when the reference time (from an external source) drifts randomly.
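
A minimal sketch of the corrupted-data and out-of-order-command items above, in Python. The 7-byte frame format, the decoder `decode_frame`, and the command codes are hypothetical, not taken from any real protocol; the robustness criterion is that every injected fault is rejected with a controlled error rather than a crash.

```python
import struct

# Hypothetical frame decoder for <seq:uint16><cmd:uint8><value:float32>;
# layout and command codes are illustrative.
def decode_frame(frame: bytes, last_seq: int):
    if len(frame) != 7:
        raise ValueError("truncated or oversized frame")
    seq, cmd = struct.unpack_from(">HB", frame)
    (value,) = struct.unpack_from(">f", frame, 3)
    if cmd not in (0x01, 0x02):
        raise ValueError("unknown command")
    if seq <= last_seq:
        raise ValueError("repeated or out-of-order frame")
    return seq, cmd, value

good = struct.pack(">HBf", 2, 0x01, 3.5)

# Injected faults: the decoder must reject them, never crash.
cases = [
    (b"", "no data at all"),
    (good[:4], "truncated frame"),
    (b"\xff" * 7, "nonsense bytes"),
    (struct.pack(">HBf", 3, 0x7F, 0.0), "unknown command"),
    (struct.pack(">HBf", 1, 0x01, 0.0), "repeated/out-of-order sequence"),
]
for frame, label in cases:
    try:
        decode_frame(frame, last_seq=1)
        print(f"FAIL: accepted {label}")
    except ValueError:
        print(f"ok: rejected {label}")

assert decode_frame(good, last_seq=1) == (2, 0x01, 3.5)  # sanity check
```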

Provoking Run-Time Errors

  • Array index out of bounds.

  • Dereference a null pointer.

  • Divide by zero.

  • Logarithm of zero.

  • Tangent of ±π/2.

  • Arithmetic over-/underflow, especially for time-related functions, such as filters, integrators, and delays.

  • Misuse of library functions: ensure that inputs/parameters to functions are properly validated.

  • For loops whose loop count is a computed value, test cases should be developed that attempt to compute out-of-range loop-count values, thus demonstrating the robustness of the loop-related code. A sketch provoking several of these errors follows this list.
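
The following Python sketch provokes several of the run-time errors above and checks that each one surfaces as a detectable exception; the `provoke` helper and the sample buffer are illustrative. Note that in floating-point arithmetic the tangent of π/2 typically does not raise at all, which is itself the kind of silent boundary behaviour these tests must expose.

```python
import math

# Each case deliberately provokes a run-time error; robust code must
# detect it (surfaced here as a Python exception) rather than continue
# with corrupted state.
def provoke(label, fn):
    try:
        result = fn()
        print(f"no exception: {label} -> {result!r}")
    except (IndexError, AttributeError, ZeroDivisionError,
            ValueError, OverflowError) as exc:
        print(f"ok: {label} raised {type(exc).__name__}")

buf = [0, 1, 2]
null = None                        # stands in for a null pointer

provoke("array index out of bounds", lambda: buf[10])
provoke("null pointer use", lambda: null.field)
provoke("divide by zero", lambda: 1 / 0)
provoke("logarithm of zero", lambda: math.log(0.0))
provoke("arithmetic overflow", lambda: math.exp(1000.0))

# tan(pi/2) does not raise in floating point: pi/2 is not exactly
# representable, so the result is a huge but finite number.
provoke("tangent of pi/2", lambda: math.tan(math.pi / 2))

# Computed loop count forced out of range; a defensive clamp keeps
# the loop-related code safe.
count = len(buf) - 10              # computed loop count goes negative
for _ in range(max(count, 0)):
    pass
print(f"loop with computed count {count} handled safely")
```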
