Notes - MIECT
Redes E Sistemas Autónomos

Supervised Learning



Training stage

  1. A dataset is collected with input data and the corresponding outputs.

  2. The input data is pre-processed to identify and/or extract relevant features.

  3. The feature data is input to the ML method, typically one feature set at a time (e.g., mean, median, standard deviation).

    1. Sometimes a blind set of candidate features is produced, and only the most relevant ones are then selected (e.g., by decision trees).

  4. For each input set, the method produces an estimate of the output (e.g., the likelihood of an error occurrence).

  5. The method compares the estimate with the actual process output (the ground truth) and updates the model's internal parameters to improve the accuracy of future estimates.

  6. The process is repeated until the performance of the method is within acceptable bounds.
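The training loop above can be sketched with a minimal example. Everything here is an assumption for illustration: a synthetic one-feature dataset, a logistic-regression model, and stochastic gradient descent standing in for the generic "ML method" of step 3.

```python
import math
import random

# Hypothetical dataset (step 1): each sample pairs a feature set with the
# ground-truth label (1 = error occurred, 0 = no error). In practice the
# features would come from a pre-processing step (steps 2-3).
random.seed(0)
dataset = [([x], 1.0 if x > 0.5 else 0.0)
           for x in (random.random() for _ in range(200))]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = [0.0], 0.0  # model parameters: one weight per feature, plus a bias
lr = 0.5           # learning rate (assumed value)

# Steps 4-6: estimate, compare with the ground truth, update the model,
# and repeat until performance is acceptable (a fixed epoch budget here).
for epoch in range(500):
    for features, label in dataset:
        estimate = sigmoid(sum(wi * xi for wi, xi in zip(w, features)) + b)
        error = estimate - label           # deviation from the ground truth
        for i, xi in enumerate(features):  # stochastic gradient-descent step
            w[i] -= lr * error * xi
        b -= lr * error

# Training accuracy, used here as the "performance" of step 6.
accuracy = sum((sigmoid(w[0] * f[0] + b) > 0.5) == (y == 1.0)
               for f, y in dataset) / len(dataset)
```

After enough passes over the data, the model separates the two classes and the training accuracy approaches 1; in a real setting the stopping criterion would be evaluated on held-out data rather than on the training set itself.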

Inference stage

  1. The trained model is deployed in its target setting.

  2. Given inputs, it can produce estimates of the process output.

  3. However, the method no longer has access to the ground truth, so it cannot continue to learn.
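A corresponding inference sketch, again purely illustrative: the weights below are assumed, hand-picked values standing in for a trained model. They are only applied to new inputs, and nothing is updated, because no ground truth is available at this stage.

```python
import math

# Hypothetical parameters frozen after the training stage; the values are
# assumed, chosen so the decision boundary sits near a feature value of 0.5.
w, b = [12.0], -6.0

def infer(features):
    """Produce an estimate of the process output for new, unlabelled inputs."""
    z = sum(wi * xi for wi, xi in zip(w, features)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Deployment (steps 1-2): only inputs are available, so the parameters
# are never updated here (step 3).
low, high = infer([0.1]), infer([0.9])
```

An input well below the boundary yields an estimate near 0, and one well above it an estimate near 1; without labels, any drift in the underlying process goes undetected until the model is retrained.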