Fuzz Testing


Fuzz testing was developed by Barton Miller at the University of Wisconsin in 1989.

Fuzz Testing, or Fuzzing, is a software testing technique that feeds invalid or random data, called FUZZ, into a software system to discover errors and security loopholes. The objective of fuzz testing is to insert data using automated or semi-automated techniques and to test the system for exceptions such as hangs, crashes, performance degradation, or failures of built-in code.

It is usually applied case by case, with specific tools or scripts adapted to the development/validation environment.
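As a minimal illustration of the idea, the Python sketch below feeds random byte strings into a target function and treats any unhandled exception as a finding. The `parse_record` target is hypothetical, a stand-in for the real system under test:

```python
import random

def parse_record(data: bytes) -> int:
    """Hypothetical target: a stand-in for the real system under test."""
    if data and data[0] == 0xFF:   # a deliberately planted parsing bug
        raise ValueError("unhandled tag 0xFF")
    return len(data)

def fuzz(target, runs: int = 10_000, max_len: int = 64) -> None:
    random.seed(1234)  # fixed seed so a crash is easy to replay
    for i in range(runs):
        data = bytes(random.randrange(256) for _ in range(random.randrange(max_len)))
        try:
            target(data)
        except Exception as exc:   # any unhandled exception counts as a finding
            print(f"run {i}: {exc!r} on input {data!r}")
            return

if __name__ == "__main__":
    fuzz(parse_record)
```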

Tools

Free and commercial tools exist:

  • Peach Fuzzer – tests for known and unknown vulnerabilities;

  • Spike Proxy – SQL injection, cross-site scripting;

  • WebScarab – for http and https;

  • OWASP WSFuzzer – webservices, http / SOAP;

  • AFL – American Fuzzy Lop;

  • Radamsa – uses real inputs and input files and then modifies them.
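As a hedged example of driving one of these tools from a harness: in its basic documented invocation, Radamsa reads a seed on stdin and writes one mutated variant to stdout. The sketch below assumes the `radamsa` binary is installed and on the PATH:

```python
import subprocess

SEED = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"

def radamsa_mutate(seed: bytes) -> bytes:
    """One mutated variant of the seed, produced by the external radamsa tool."""
    return subprocess.run(["radamsa"], input=seed,
                          stdout=subprocess.PIPE, check=True).stdout

if __name__ == "__main__":
    for _ in range(5):
        print(radamsa_mutate(SEED))
```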

Basics

  • Test cases are generated automatically, either mutation-based or generation-based (a mutation sketch follows this list).

  • Many slightly anomalous test cases are input into a target interface.

  • The application is monitored for errors.

  • Inputs are:

    • file-based (.pdf, .png, .wav, .mpg), or;

    • network-based (ftp, http, SNMP, SOAP), or;

    • other (e.g. crashme()).
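A minimal sketch of the mutation-based approach: start from a valid seed (here the 8-byte PNG file signature, chosen only for illustration) and flip a few random bits per test case:

```python
import random

def mutate(seed: bytes, max_flips: int = 4) -> bytes:
    """Mutation-based fuzzing: corrupt a few random bits of a valid input."""
    data = bytearray(seed)
    for _ in range(random.randrange(1, max_flips + 1)):
        pos = random.randrange(len(data))
        data[pos] ^= 1 << random.randrange(8)   # flip one random bit
    return bytes(data)

seed = b"\x89PNG\r\n\x1a\n"   # valid PNG signature used as the seed
for _ in range(3):
    print(mutate(seed).hex(" "))
```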

Protocol-specific knowledge is very helpful.

  • Generation-based fuzzing tends to be better than random; better knowledge of the specification makes better fuzzers (see the sketch below).
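A generation-based sketch, assuming a toy HTTP-like request grammar: instead of corrupting captured inputs, test cases are built from the protocol's structure, with deliberately out-of-spec values substituted into each field:

```python
import random

# Deliberately out-of-spec values for each field of the request line.
METHODS = ["GET", "PUT", "", "G\x00T", "X" * 4096]
PATHS = ["/", "/%n%n%n", "/../../etc/passwd", "/a" * 512]
VERSIONS = ["HTTP/1.1", "HTTP/9.9", "HTTP/", ""]

def generate_request() -> bytes:
    """Generation-based fuzzing: build inputs from the protocol grammar."""
    line = f"{random.choice(METHODS)} {random.choice(PATHS)} {random.choice(VERSIONS)}"
    return (line + "\r\nHost: example.com\r\n\r\n").encode()

for _ in range(3):
    print(generate_request()[:80])
```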

More fuzzers are better:

  • Each implementation will vary; different fuzzers find different bugs.

  • The best is probably your own (with system knowledge).

The longer you run, the more bugs you find.

The best results come from guiding the process:

  • Notice where you are getting stuck; use profiling!

  • Code coverage can be very useful for guiding the process (see the sketch below).
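A toy coverage-guided loop, sketched with Python's `sys.settrace` as a stand-in for real instrumentation: inputs that reach previously unseen source lines are kept in the corpus and mutated further, which is roughly how AFL-style fuzzers guide themselves toward the nested branch below:

```python
import random
import sys

def target(data: bytes) -> None:
    """Toy target with a nested branch that plain random fuzzing rarely reaches."""
    if len(data) > 0 and data[0] == ord("F"):
        if len(data) > 1 and data[1] == ord("U"):
            raise RuntimeError("bug reached")

def run_with_coverage(data: bytes):
    """Run the target while recording which source lines executed."""
    lines = set()
    def tracer(frame, event, arg):
        if event == "line":
            lines.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        target(data)
        crashed = False
    except RuntimeError:
        crashed = True
    finally:
        sys.settrace(None)
    return frozenset(lines), crashed

def mutate(parent: bytes) -> bytes:
    """Either overwrite one random byte or append a new one."""
    data = bytearray(parent)
    if data and random.random() < 0.5:
        data[random.randrange(len(data))] = random.randrange(256)
    else:
        data.append(random.randrange(256))
    return bytes(data)

random.seed(0)
seen, corpus = set(), [b"A"]
for i in range(20_000):
    child = mutate(random.choice(corpus))
    cov, crashed = run_with_coverage(child)
    if crashed:
        print(f"crash after {i} runs on input {child!r}")
        break
    if not cov <= seen:        # new lines reached: keep this input
        seen |= cov
        corpus.append(child)
```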
