Notes - MCS
Machine Learning Applied to Security

Limitations


  • Our model is a simplification of reality.

  • The simplification is based on assumptions (model bias).

  • These assumptions fail in certain situations.

Bias and Variance

Bias measures the error introduced by the model's simplifying assumptions, while variance measures how sensitive the fitted model is to the particular training sample. Low bias lets the model fit the training data accurately; low variance lets it generalize to new, unseen data.
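
This trade-off is captured by the standard bias-variance decomposition of the expected squared error of a trained model $\hat{f}$ at a point $x$, where $f$ is the true function, the expectation is taken over training sets, and $\sigma^2$ is the irreducible noise:

$$
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
= \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}\big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\big]}_{\text{variance}}
+ \sigma^2
$$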

Bias is associated with underfitting. High bias means the model is too simplistic to capture the underlying patterns in the data, which leads to poor performance on the training data itself (and, consequently, on new data).

Variance is associated with overfitting. High variance means the model is overly complex and fits the noise in the training data, so it performs well on the training data but poorly on new, unseen data.
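
A minimal sketch of the trade-off, assuming NumPy and scikit-learn are available; the data is synthetic and invented purely for illustration. A degree-1 polynomial underfits (high bias), while a degree-15 polynomial overfits (high variance):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic data: a sine curve plus Gaussian noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

for degree in (1, 4, 15):
    # Polynomial regression of increasing flexibility.
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    # degree 1:  high bias     -> high error on both splits (underfitting)
    # degree 15: high variance -> low train error, higher test error (overfitting)
    print(f"degree={degree:2d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
```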