Memory partitioning
After reserving some amount of memory for the operating system, how should the remaining real memory be partitioned to accommodate the different processes?
Fixed partitioning
into slices of equal size.
into slices of different sizes.
Dynamic partitioning
done as the need arises.
Main memory can be divided into a number of static slices at system generation time,
not necessarily all of the same size.
The logical address space of a process may be loaded into a slice of equal or greater size;
thus, the largest slice determines the size of the largest allowable process.
Some features:
Simple to implement.
Efficient – little operating system overhead.
Fixed number of allowable processes.
Inefficient use of memory due to internal fragmentation – the part of a slice not used by a process is wasted.
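The loading rule and the internal fragmentation it causes can be sketched as follows (a minimal sketch: the slice sizes and process names are hypothetical, not taken from the text):

```python
# Minimal sketch of fixed partitioning (hypothetical slice and process sizes).
SLICES = [64, 128, 256, 512]          # static slices, set at system generation time
occupied = [None] * len(SLICES)       # which process, if any, each slice holds

def load(process, size):
    """Place the process in the first free slice of equal or greater size."""
    for i, slice_size in enumerate(SLICES):
        if occupied[i] is None and size <= slice_size:
            occupied[i] = process
            return i
    return None                       # no free slice is large enough

def internal_fragmentation(sizes):
    """Memory wasted inside occupied slices: slice size minus process size."""
    return sum(SLICES[i] - sizes[occupied[i]]
               for i in range(len(SLICES)) if occupied[i] is not None)

sizes = {"A": 100, "B": 30, "C": 600}
load("A", sizes["A"])   # goes to the 128-unit slice -> 28 units wasted
load("B", sizes["B"])   # goes to the 64-unit slice  -> 34 units wasted
load("C", sizes["C"])   # larger than the largest slice -> rejected
```

Note how process C is rejected even though 640 units are still free: the largest slice bounds the largest allowable process.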
If a slice becomes available, which of the SUSPENDED-READY processes should be placed there?
Two different scheduling policies are considered here.
Valuing fairness – the first process in the queue of SUSPENDED-READY processes whose address space fits in the slice is chosen.
Valuing the occupation of main memory – the first process in the queue of SUSPENDED-READY processes with the largest address space that fits in the slice is chosen.
To avoid starvation, an aging mechanism can be used.
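The two selection policies can be sketched as follows (a minimal sketch; the queue contents and sizes are hypothetical):

```python
# Sketch of the two selection policies for a freed slice (hypothetical queue).
def first_that_fits(queue, slice_size):
    """Valuing fairness: the first queued process whose address space fits."""
    for process, size in queue:
        if size <= slice_size:
            return process
    return None

def largest_that_fits(queue, slice_size):
    """Valuing memory occupation: the first queued process with the largest
    address space that still fits in the slice."""
    fitting = [(process, size) for process, size in queue if size <= slice_size]
    return max(fitting, key=lambda p: p[1])[0] if fitting else None

queue = [("P1", 90), ("P2", 40), ("P3", 60)]
first_that_fits(queue, 70)    # -> "P2" (first in the queue that fits)
largest_that_fits(queue, 70)  # -> "P3" (largest that fits)
```

Under the second policy a small process such as P2 can wait indefinitely while larger processes keep being preferred, which is why aging is needed.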
In dynamic partitioning, at the start, all the available memory constitutes a single block; then the operating system:
reserves a region of sufficient size to load the address space of each process that arrives.
releases that region when it is no longer needed.
As the memory is dynamically reserved and released, the operating system has to keep an updated record of occupied and free regions.
One way to do this is by building two doubly linked lists:
list of occupied regions – locates the regions that have been reserved to store the address spaces of processes resident in main memory.
list of free regions – locates the regions still available.
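A minimal sketch of such a record, simplified to the free list alone and with hypothetical sizes and addresses (a real implementation would also maintain the occupied list):

```python
# Sketch of the operating system's record of free regions, kept as a list of
# (start, size) pairs sorted by start address; all values are hypothetical.
free_regions = [(0, 1000)]            # initially one single free block

def reserve(size):
    """Carve a region out of the first free region that is large enough."""
    for i, (start, length) in enumerate(free_regions):
        if length >= size:
            if length == size:
                del free_regions[i]   # the free region is consumed entirely
            else:
                free_regions[i] = (start + size, length - size)
            return start
    return None

def release(start, size):
    """Return a region to the free list, coalescing adjacent free regions."""
    free_regions.append((start, size))
    free_regions.sort()
    merged = [free_regions[0]]
    for s, l in free_regions[1:]:
        ps, pl = merged[-1]
        if ps + pl == s:              # adjacent to the previous region: merge
            merged[-1] = (ps, pl + l)
        else:
            merged.append((s, l))
    free_regions[:] = merged

a = reserve(200)    # -> 0
b = reserve(300)    # -> 200
release(0, 200)     # free list becomes [(0, 200), (500, 500)]
```

Coalescing on release is what keeps the free list from degenerating into many tiny adjacent entries.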
Memory is not allocated at byte boundaries, because:
useless, very small free regions would appear.
they would be included in the list of free regions.
making subsequent searches more complex.
Thus, the main memory is typically divided into blocks of fixed size and allocation is made in units of these blocks.
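Block-granular allocation amounts to rounding each request up to a whole number of blocks; a minimal sketch, assuming a hypothetical block size of 1 KiB:

```python
# Rounding a request up to whole blocks (the 1 KiB block size is hypothetical).
BLOCK_SIZE = 1024

def blocks_needed(size_in_bytes):
    """Number of fixed-size blocks required to hold an address space."""
    return -(-size_in_bytes // BLOCK_SIZE)   # ceiling division

blocks_needed(1)     # -> 1  (a single byte still occupies a whole block)
blocks_needed(1024)  # -> 1
blocks_needed(1025)  # -> 2
```

The rounding trades a bounded amount of internal fragmentation (at most one block minus one byte per process) for a much simpler free list.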
Valuing fairness is the scheduling discipline generally adopted: the first process in the queue of SUSPENDED-READY processes whose address space can be placed in main memory is chosen.
Dynamic partitioning can produce external fragmentation.
Free space becomes split into a large number of (possibly) small free regions.
Situations can be reached where, although there is enough free memory, it is not contiguous, and the storage of the address space of a new or suspended process is no longer possible.
The solution is memory compaction – moving the resident address spaces so as to group all the free regions into a single one.
This operation requires stopping all processing and, if the memory is large, can take a very long time.
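Compaction can be sketched as follows (the memory layout is hypothetical): every resident address space is copied down to the lowest available address, leaving one single free region at the top.

```python
# Sketch of compaction over a hypothetical layout of (process, start, size).
def compact(occupied, memory_size):
    """Slide all occupied regions to the lowest addresses; return the new
    layout and the single resulting free region as (start, size)."""
    new_layout = []
    next_free = 0
    for process, _start, size in sorted(occupied, key=lambda r: r[1]):
        new_layout.append((process, next_free, size))   # copy region downwards
        next_free += size
    return new_layout, (next_free, memory_size - next_free)

layout = [("A", 0, 100), ("B", 300, 200), ("C", 700, 100)]
compact(layout, 1000)
# -> ([("A", 0, 100), ("B", 100, 200), ("C", 300, 100)], (400, 600))
```

The cost hinted at above is visible here: every occupied region may have to be copied, so the running time grows with the amount of resident memory.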
In case there are several free regions available, which one to use to allocate the address space of a process?
Possible policies:
first fit – the list of free regions is searched from the beginning until the first region with sufficient size is found.
next fit – a variant of first fit that starts the search from the stop point of the previous search.
best fit – the list of free regions is fully searched, choosing the smallest region with sufficient size for the process.
worst fit – the list of free regions is fully searched, choosing the largest existing region.
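The four policies can be sketched over a free-region list of (start, size) pairs (a minimal sketch; the list contents are hypothetical):

```python
# Sketch of the four placement policies; each returns the index of the chosen
# free region, or None. The free list contents below are hypothetical.
def first_fit(free, size):
    """First region, from the start of the list, that is large enough."""
    for i, (_start, length) in enumerate(free):
        if length >= size:
            return i
    return None

def next_fit(free, size, last):
    """First fit, but resuming from where the previous search stopped."""
    n = len(free)
    for k in range(n):
        i = (last + k) % n
        if free[i][1] >= size:
            return i
    return None

def best_fit(free, size):
    """Smallest region with sufficient size (full search of the list)."""
    fitting = [i for i, (_s, length) in enumerate(free) if length >= size]
    return min(fitting, key=lambda i: free[i][1]) if fitting else None

def worst_fit(free, size):
    """Largest existing region (full search of the list)."""
    fitting = [i for i, (_s, length) in enumerate(free) if length >= size]
    return max(fitting, key=lambda i: free[i][1]) if fitting else None

free = [(0, 100), (300, 50), (600, 400), (1500, 200)]
first_fit(free, 150)         # -> 2 (the 400-unit region, first that fits)
best_fit(free, 150)          # -> 3 (the 200-unit region, smallest that fits)
worst_fit(free, 150)         # -> 2 (the 400-unit region, the largest)
next_fit(free, 150, last=3)  # -> 3 (resumes at the previous stop point)
```

First fit and next fit stop at the first match, while best fit and worst fit must traverse the whole list, which is one of the trade-offs behind the questions below.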
Which one is the best?
in terms of fragmentation.
in terms of efficiency of allocation.
in terms of efficiency of release.
Advantages:
general – the scope of application is independent of the type of processes that will be executed.
low complexity implementation – no special hardware is required and the data structures are reduced to two doubly linked lists.
Disadvantages:
external fragmentation – the fraction of the main memory that ends up being wasted, given the small size of the regions into which it is divided, can in some cases reach about a third of the total (the 50% rule).
inefficient – it is not possible to build algorithms that are simultaneously very efficient at both allocating and freeing space.