Introduction

A process must have its address space, at least partially, resident in main memory in order to execute.

In a multiprogramming environment, to maximize processor utilization and improve response (or turnaround) time, a computer system must keep the address spaces of multiple processes resident in main memory.

However, there may not be room for all of them.

  • because, although main memory capacity has grown over the years, “data expands to fill the space available for storage”

Memory hierarchy

Ideally, an application programmer would like memory that is infinitely large, infinitely fast, non-volatile, and inexpensive.

  • In practice, this is not possible.

Thus, the memory of a computer system is typically organized at different levels, forming a hierarchy.

  • cache memory – small (tens of KB to a few MB), very fast, volatile, and expensive.

  • main memory – medium size (hundreds of MB to hundreds of GB), volatile, with medium access speed and medium cost.

  • secondary memory – large (tens, hundreds, or thousands of GB), slow, non-volatile, and cheap.

The cache memory holds a copy of the memory locations (instructions and operands) most frequently referenced by the processor in the recent past.

  • The level 1 cache is located on the processor’s own integrated circuit.

  • Levels 2 and 3 may reside on a separate integrated circuit mounted on the same substrate.

  • Data transfer to and from main memory is carried out almost completely transparently to the system programmer.

Secondary memory has two main functions:

  • File system – storage for more or less permanent information (programs and data).

  • Swapping area – an extension of main memory, so that its size does not limit the number of processes that can coexist at any given time.

    • the swapping area can be a disk partition dedicated to that purpose or a file within a file system.

This type of organization is based on the assumption that the further an instruction or operand is from the processor, the fewer times it will be referenced.

  • Under these conditions, the mean access time tends to stay close to the lowest value in the hierarchy (see the sketch below).
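A rough illustration of why this holds, written as an effective access time weighted by the fraction h of references satisfied by the faster level (the hit ratio). The access times and the value of h below are arbitrary assumptions chosen only to make the point, not figures from the text:

```latex
t_{\text{eff}} = h\,t_{\text{cache}} + (1 - h)\,t_{\text{main}}
% e.g. with t_cache = 1 ns, t_main = 100 ns and h = 0.99:
t_{\text{eff}} = 0.99 \times 1\,\text{ns} + 0.01 \times 100\,\text{ns} \approx 2\,\text{ns}
```

With a high hit ratio, the result stays far closer to the cache access time than to the main memory access time.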

This assumption rests on the principle of locality of reference.

  • The tendency of a program to access the same set of memory locations repeatedly over a short period of time (illustrated in the sketch below).
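A small C sketch of how locality shows up in practice; the matrix size and function names are arbitrary choices for the example. Both functions compute the same sum, but the first walks memory in the order it is laid out and so reuses cache lines, while the second strides a whole row ahead on every reference and loses most of that spatial locality:

```c
#include <stddef.h>

#define N 1024

/* Good locality: consecutive references fall within the same cache lines,
 * because C stores the matrix row by row. */
long sum_row_major(const int m[N][N])
{
    long s = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += m[i][j];
    return s;
}

/* Poor locality: each reference jumps N elements ahead, so most accesses
 * touch a different cache line and few of them are reused. */
long sum_column_major(const int m[N][N])
{
    long s = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += m[i][j];
    return s;
}
```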

Role

The role of memory management in a multiprogramming environment focuses on allocating memory to processes and on controlling the transfer of data between main and secondary memory (the swapping area). In particular, it must:

  • Keep a record of which parts of main memory are occupied and which are free (a minimal sketch follows this list).

  • Reserve portions of main memory for the processes that need them, and release those portions when they are no longer needed.

  • Swap out all or part of a process’s address space when main memory is too small to hold all coexisting processes.

  • Swap in all or part of a process’s address space when main memory becomes available again.
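A minimal sketch of the bookkeeping and reservation tasks above, assuming main memory is managed as fixed-size frames tracked by a bitmap; the frame count, the names, and the bitmap approach itself are illustrative choices, not a mechanism prescribed by the text:

```c
#include <stdbool.h>
#include <stddef.h>

#define NUM_FRAMES 1024                 /* total frames under management (assumed) */

static unsigned char frame_map[NUM_FRAMES / 8];   /* 1 bit per frame: 1 = occupied */

static bool frame_in_use(size_t f) { return frame_map[f / 8] &  (1u << (f % 8)); }
static void mark_used(size_t f)    {        frame_map[f / 8] |= (1u << (f % 8)); }
static void mark_free(size_t f)    {        frame_map[f / 8] &= (unsigned char)~(1u << (f % 8)); }

/* Reserve `count` contiguous frames for a process; returns the index of the
 * first frame, or -1 if no sufficiently large free run exists. */
long reserve_frames(size_t count)
{
    size_t run = 0;
    for (size_t f = 0; f < NUM_FRAMES; f++) {
        run = frame_in_use(f) ? 0 : run + 1;
        if (run == count) {
            size_t first = f + 1 - count;
            for (size_t i = first; i <= f; i++)
                mark_used(i);
            return (long)first;
        }
    }
    return -1;   /* no room: a candidate situation for swapping a process out */
}

/* Release frames that a process no longer needs. */
void release_frames(size_t first, size_t count)
{
    for (size_t i = first; i < first + count; i++)
        mark_free(i);
}
```

A real memory manager would pair such a structure with per-process tables and a policy for choosing which process to swap out when a reservation fails.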
