Notes - MIECT
Sistemas De Operação
Threads, mutexes and condition variables in Unix/Linux

Threads

Single threading

In a traditional operating system, a process includes:

  • an address space (code and data of the associated program).

  • a set of communication channels with I/O devices.

  • a single thread of control, which incorporates the processor registers (including the program counter) and a stack.

However, these components can be managed separately.

In this model, a thread appears as the execution component within a process.

Multithreading

Several independent threads can coexist in the same process, thus sharing the same address space and the same I/O context.

  • This is referred to as multithreading.

Threads can be seen as lightweight processes.
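
For concreteness, here is a minimal sketch using the POSIX threads API (pthread.h) in which three threads coexist in the same process and therefore read the same global variable; the names worker and shared_value are illustrative, not part of any API.

```c
/* Minimal multithreading sketch with POSIX threads.
 * Build (Linux): gcc threads_demo.c -o threads_demo -pthread
 * shared_value and worker are illustrative names. */
#include <pthread.h>
#include <stdio.h>

static int shared_value = 42;   /* data segment: visible to every thread of the process */

static void *worker(void *arg)
{
    long id = (long)arg;
    /* all threads see the same shared_value because they share the address space */
    printf("thread %ld sees shared_value = %d\n", id, shared_value);
    return NULL;
}

int main(void)
{
    pthread_t tid[3];

    for (long i = 0; i < 3; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);

    return 0;
}
```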

Structure of a multithreaded program

Each thread is typically associated with the execution of a function that implements some specific activity.

Communication between threads can be done through the process's data, which is global from the threads' point of view.

  • It includes static and dynamic variables (heap memory).

The main program, also represented by a function that implements a specific activity, is the first thread to be created and, in general, the last to be destroyed.
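
A sketch of this structure, assuming POSIX threads: main() is the first thread, each worker function implements one activity, and the threads communicate through dynamically allocated (heap) memory; partial_sum and NWORKERS are illustrative names.

```c
/* Typical structure of a multithreaded program: main() creates the workers,
 * each worker implements one activity, and results are exchanged through
 * heap memory shared by all threads.
 * Build (Linux): gcc structure_demo.c -o structure_demo -pthread */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NWORKERS 4

static long *partial_sum;              /* dynamic (heap) data, visible to all threads */

static void *worker(void *arg)
{
    long id = (long)arg;
    long sum = 0;

    for (long i = id * 1000; i < (id + 1) * 1000; i++)
        sum += i;

    partial_sum[id] = sum;             /* each thread writes only its own slot */
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];

    partial_sum = malloc(NWORKERS * sizeof *partial_sum);

    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    long total = 0;
    for (int i = 0; i < NWORKERS; i++) {   /* main is the last thread to finish */
        pthread_join(tid[i], NULL);
        total += partial_sum[i];
    }

    printf("total = %ld\n", total);
    free(partial_sum);
    return 0;
}
```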

Implementations of multithreading

User level threads

Threads are implemented by a library, at user level, which provides creation and management of threads without kernel intervention.

  • versatile and portable.

  • when a thread calls a blocking system call, the whole process blocks.

    • because the kernel only sees the process.

Kernel level threads

Threads are implemented directly at kernel level.

  • less versatile and less portable.

  • when a thread calls a blocking system call, another thread can be scheduled for execution (see the sketch below).
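
A sketch of this behaviour, assuming kernel-level threads as provided by Linux (NPTL): one thread blocks in a system call while a second thread keeps being scheduled; blocker and counter are illustrative names.

```c
/* With kernel-level threads, a thread blocked in a system call does not
 * block its siblings: counter keeps running while blocker sleeps.
 * Build (Linux): gcc blocking_demo.c -o blocking_demo -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *blocker(void *arg)
{
    (void)arg;
    sleep(2);                          /* blocking system call: only this thread waits */
    printf("blocker woke up\n");
    return NULL;
}

static void *counter(void *arg)
{
    (void)arg;
    for (int i = 0; i < 4; i++) {      /* keeps being scheduled while blocker is blocked */
        printf("counter still running (%d)\n", i);
        usleep(300000);
    }
    return NULL;
}

int main(void)
{
    pthread_t b, c;

    pthread_create(&b, NULL, blocker, NULL);
    pthread_create(&c, NULL, counter, NULL);
    pthread_join(b, NULL);
    pthread_join(c, NULL);
    return 0;
}
```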

Advantages of multithreading

Easier implementation of applications – in many applications, decomposing the solution into a number of parallel activities makes the programming model simpler.

  • since the address space and the I/O context are shared among all threads, multithreading favors this decomposition.

Better management of computer resources – creating, destroying and switching threads is easier than doing the same with processes.

Better performance – when an application involves substantial I/O, multithreading allows activities to overlap, thus speeding up its execution.

Multiprocessing – real parallelism is possible if multiple CPUs exist.