At any given point in time, multiple processes execute concurrently.

Each process needs main memory; the operating system manages memory allocation.

Two issues:

sharing the available main memory among multiple processes

virtual memory: using secondary memory to give the impression that main memory is larger and can therefore accommodate more processes

Sharing available memory: partitioning, memory overlays, swapping, etc.

Virtual memory: paging, segmentation (these could also be used as techniques for sharing available memory)
The memory is divided into N partitions.

The size of a partition is arbitrary, but once a partition is created, the size is fixed.

We maintain a queue for each partition.

Any space in a partition not used by a process is lost (internal fragmentation).

Used by OS/360 (MFT).
Each process is assigned a partition; however, the number, size, and location of the partitions can vary.
First-fit: allocate the first hole that is big enough.

Best-fit: allocate the smallest hole that is big enough.

Worst-fit: allocate the largest hole.
The overhead of keeping track of small holes can be substantially larger than the holes themselves.

Best-fit is slower than first-fit because it must search the entire list.

Best-fit tends to fill up memory with tiny, useless holes.

First-fit generates larger holes on average.

Simulations have shown that first-fit and best-fit give better results than worst-fit, with first-fit being faster.

To speed up these algorithms, we can maintain separate lists for processes and holes.

To make best-fit faster, we can maintain the hole list sorted by size.
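As a concrete illustration, here is a minimal C sketch of first-fit and best-fit over a singly linked list of free holes. The struct and function names are illustrative assumptions, not taken from any particular system.

    #include <stddef.h>

    /* Illustrative free-hole list node. */
    struct hole {
        size_t start;        /* start address of the free hole */
        size_t size;         /* size of the hole in bytes      */
        struct hole *next;   /* next hole in the free list     */
    };

    /* First-fit: return the first hole large enough for the request. */
    struct hole *first_fit(struct hole *free_list, size_t request)
    {
        for (struct hole *h = free_list; h != NULL; h = h->next)
            if (h->size >= request)
                return h;
        return NULL;          /* no hole is big enough */
    }

    /* Best-fit: scan the whole list and return the smallest adequate hole. */
    struct hole *best_fit(struct hole *free_list, size_t request)
    {
        struct hole *best = NULL;
        for (struct hole *h = free_list; h != NULL; h = h->next)
            if (h->size >= request && (best == NULL || h->size < best->size))
                best = h;
        return best;
    }

Sorting the hole list by size would let best_fit stop at the first adequate hole instead of scanning the whole list.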
Paging is a memory management scheme that allows a process to occupy non-contiguous areas of physical memory.

The logical address space is divided into pages.

The physical memory is divided into page frames of the same size.
Logical Address = Page Address + Offset

148 = 100 + 48

Physical Address = Frame Address + Offset

348 = 300 + 48
page = logical-address / page-size

offset = logical-address mod page-size

page-frame = page-table(page)

physical address = (page-frame * page-size) + offset

Example (all numbers in binary, page size = 10000):

page = 10101 / 10000 = 1

offset = 10101 mod 10000 = 0101

page-frame = page-table(1) = 11

physical address = (11 * 10000) + 0101 = 110101
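When the page size is a power of two, the same translation can be done with shifts and masks. Below is a small C sketch using the binary example above (page size 16, page 1 mapped to frame 3); the page table here is just an illustrative array.

    #include <stdint.h>

    #define PAGE_SHIFT  4                      /* page size = 2^4 = 16 (10000 in binary) */
    #define PAGE_SIZE   (1u << PAGE_SHIFT)
    #define OFFSET_MASK (PAGE_SIZE - 1)

    /* Illustrative page table for the example above: page 1 maps to frame 3 (11 in binary). */
    static const uint32_t page_table[] = { 0, 3 };

    uint32_t translate(uint32_t logical)
    {
        uint32_t page   = logical >> PAGE_SHIFT;   /* logical-address / page-size        */
        uint32_t offset = logical &  OFFSET_MASK;  /* logical-address mod page-size      */
        uint32_t frame  = page_table[page];        /* page-frame = page-table(page)      */
        return (frame << PAGE_SHIFT) | offset;     /* (page-frame * page-size) + offset  */
    }

    /* translate(21) == 53, i.e. 10101 -> 110101 in binary. */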
There is no external fragmentation, since every page frame can be used.

There is internal fragmentation, since the last page in the logical space of every process may not be completely used (on average: half a page per process).

A small page size means less internal fragmentation.

A large page size means less overhead, both in page table size and page transfer time.
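For example (illustrative numbers): with a 4 KB page size, a 9 KB process occupies three page frames and wastes about 3 KB of the last frame; with 1 KB pages it would waste nothing here, but its page table would need four times as many entries for the same address space.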
The page table can be implemented as a set of dedicated registers.

This is reasonable only for small page tables.

Example: a 16-bit logical address space with a page size of 8K gives a page table of only 8 entries.

The page table can instead be kept in main memory, using a PTBR (page-table base register) to point to the page table of the current process.

In that case, 2 memory accesses are needed for every data access, and the speed could be intolerable.

Associative registers, or translation look-aside buffers (TLBs).

Only a few page table entries are kept in the associative registers.

The hit ratio is the percentage of times that a page number is found in the associative registers.
Memory access time = 60 nanoseconds

Associative registers access time = 20 nanoseconds

Hit ratio = 80%

Effective access time = 0.80 * (20 + 60) + 0.20 * (20 + 60 + 60) = 92 nanoseconds
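The same calculation can be written as a small function. This is a minimal sketch, assuming the hit ratio is given as a fraction and the associative registers are consulted before memory on both hits and misses, as in the example above.

    /* Effective access time with a TLB lookup in front of memory.
       On a miss, the page table in memory must be read before the data
       access, adding one extra memory access. */
    double effective_access_time(double hit_ratio, double tlb_ns, double mem_ns)
    {
        return hit_ratio * (tlb_ns + mem_ns)
             + (1.0 - hit_ratio) * (tlb_ns + mem_ns + mem_ns);
    }

    /* effective_access_time(0.80, 20.0, 60.0) == 92.0 */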
Page tables can be huge:

32-bit virtual address space

4K page size

page table size = 2^32 / 2^12 = 2^20 entries

Over 1 million entries!
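Assuming, say, 4 bytes per page table entry (an illustrative figure), 2^20 entries amount to about 4 MB of page table per process.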
Virtual memory is a technique that allows the execution of a process without allocating all of the memory space that it needs.

Various parts of a process are swapped into memory as they become needed during execution.

Advantages:

Programs can be larger than the size of the physical memory.

Virtual memory provides a logical view of a memory that is larger than its physical size.

Demand Paging:

Most virtual memory systems use paging.

A page is not swapped into memory until it is needed.
A valid/invalid bit is used to indicate whether a page is currently in memory or not.

A page fault occurs whenever the page referenced by the process is not in memory.

Steps in handling a page fault:

a trap is generated

a page frame is allocated

the page is read into the page frame

the page table is updated

the instruction is restarted
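The steps above can be sketched as a schematic handler in C. This is only an outline under simplifying assumptions; the struct pte layout and the helpers allocate_frame and read_page_from_backing_store are hypothetical.

    #include <stdint.h>

    /* Hypothetical page table entry and helpers, for illustration only. */
    struct pte { uint32_t frame; unsigned valid : 1; };

    uint32_t allocate_frame(void);                                     /* hypothetical: may evict a page */
    void read_page_from_backing_store(uint32_t page, uint32_t frame);  /* hypothetical: disk read        */

    void handle_page_fault(struct pte *page_table, uint32_t page)
    {
        /* 1. The reference to an invalid page has already trapped into the kernel. */
        uint32_t frame = allocate_frame();            /* 2. a page frame is allocated       */
        read_page_from_backing_store(page, frame);    /* 3. the page is read into the frame */
        page_table[page].frame = frame;               /* 4. the page table is updated ...   */
        page_table[page].valid = 1;                   /*    ... and the entry marked valid  */
        /* 5. On return from the trap, the faulting instruction is restarted. */
    }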
When memory is over-allocated, there will always be a need to replace pages that are in memory.

A page replacement algorithm must choose a page to replace in such a way that it minimizes the page fault rate.

Page Replacement Algorithms:

FIFO

Optimal

Not Recently Used (NRU)

Second Chance

Least Recently Used (LRU)
FIFO: when a page frame is needed, replace the page that has been in memory the longest.
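A minimal C sketch of FIFO replacement, assuming a fixed set of frames filled in arrival order; the frame count and names are illustrative.

    /* frames[] holds the resident pages in the order they were loaded;
       `oldest` points at the page that has been in memory the longest. */
    #define NFRAMES 4

    static int frames[NFRAMES] = { -1, -1, -1, -1 };  /* -1 = empty frame */
    static int oldest = 0;

    /* Returns the frame index chosen as the victim and installs the new page. */
    int fifo_replace(int new_page)
    {
        int victim = oldest;              /* evict the earliest-loaded page     */
        frames[victim] = new_page;
        oldest = (oldest + 1) % NFRAMES;  /* the next slot is now the oldest    */
        return victim;
    }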
Optimal: replace the page that will not be referenced again for the longest time.

It is the best page replacement algorithm.

It is impossible to implement, since it requires knowledge of future references.

It can be used to measure how effective other page replacement algorithms are.
Not Recently Used (NRU): replaces one of the pages that has not been recently used.

Two status bits are associated with each page in memory:

The R bit is set when the page is referenced.

The M bit is set when the page is modified.

Since the R bit is cleared periodically, the pages can be divided into 4 categories:

not referenced, not modified

not referenced, modified

referenced, not modified

referenced, modified
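A small C sketch of the classification, numbering the categories 0-3 so that a victim can be taken from the lowest non-empty class; the structure and function names are illustrative.

    /* class 0: R=0,M=0   class 1: R=0,M=1   class 2: R=1,M=0   class 3: R=1,M=1 */
    struct page_info { unsigned r : 1; unsigned m : 1; };

    int nru_class(struct page_info p)
    {
        return (p.r << 1) | p.m;
    }

    /* Pick a page from the lowest class among the resident pages. */
    int nru_victim(struct page_info pages[], int npages)
    {
        int victim = 0;
        for (int i = 1; i < npages; i++)
            if (nru_class(pages[i]) < nru_class(pages[victim]))
                victim = i;     /* prefer not-referenced, not-modified pages */
        return victim;
    }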
Second chance: uses a FIFO replacement strategy; however, it evicts the oldest page only if its R bit is 0.

If the R bit of the oldest page is 1, the bit is cleared and the page is moved to the end of the queue as if it had just arrived, and the search continues.

The algorithm avoids evicting a heavily used page that happens to be the oldest one in memory.
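Below is a minimal C sketch using the circular-buffer ("clock") formulation of second chance, which is equivalent to moving pages to the back of a FIFO queue; the frame count and names are illustrative.

    enum { N_RESIDENT = 4 };

    struct frame { int page; unsigned r : 1; };
    static struct frame frames[N_RESIDENT];
    static int hand = 0;    /* index of the current oldest page */

    int second_chance_victim(void)
    {
        for (;;) {
            if (frames[hand].r == 0) {             /* oldest page not recently used: evict it */
                int victim = hand;
                hand = (hand + 1) % N_RESIDENT;
                return victim;
            }
            frames[hand].r = 0;                    /* give it a second chance: clear R ...    */
            hand = (hand + 1) % N_RESIDENT;        /* ... and treat it as newly arrived       */
        }
    }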
LRU associates with each page the time when the page was last used.

Replace the page that has not been used for the longest time.

Realizable, but not cheap.

Simulating the LRU Algorithm: Aging

associate a counter byte with each page in memory

periodically, shift each page's counter one bit to the right and insert the page's reference bit into the high-order bit

the page with the lowest counter value is (approximately) the least recently used page
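A minimal C sketch of the aging step, assuming an 8-bit counter per page and an R bit set on each reference; names are illustrative, and clearing R after each tick is a common convention rather than something stated above.

    #include <stdint.h>

    struct aged_page { uint8_t counter; unsigned r : 1; };

    /* Called periodically (e.g. on every clock tick) for every resident page. */
    void age_pages(struct aged_page pages[], int npages)
    {
        for (int i = 0; i < npages; i++) {
            pages[i].counter >>= 1;                          /* shift the counter right         */
            pages[i].counter |= (uint8_t)(pages[i].r << 7);  /* R bit into the high-order bit   */
            pages[i].r = 0;                                  /* clear R for the next interval   */
        }
    }

    /* The page with the lowest counter value approximates the least recently used page. */
    int aging_victim(const struct aged_page pages[], int npages)
    {
        int victim = 0;
        for (int i = 1; i < npages; i++)
            if (pages[i].counter < pages[victim].counter)
                victim = i;
        return victim;
    }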
The working-set model

Allocation of page frames

Paging daemons:

A background process that periodically inspects memory, looking for pages to evict based on the page replacement algorithm.

The contents of the page may be remembered in case the page is needed again before it gets overwritten.

Locking pages:

A page is locked in memory to prevent its replacement.

Page locks are needed so that I/O operations can complete before the page to which the I/O is being done gets replaced.

An alternative is to use system buffers to do I/O.

Page sharing

Page size
Most processes exhibit the locality of reference property.

The working set of a process consists of all the pages that the process is currently using.

Thrashing occurs when a process has a high rate of page faults, which happens when its working set does not fit in memory.
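One common way to make "currently using" precise (an assumption about the formulation, not stated above) is to define the working set W(t, D) as the set of pages referenced during the most recent D memory references before time t, where D is the working-set window.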
Local allocation: when a process page faults, it replaces one of its own pages.

Global allocation: when a process page faults, any page in memory may be replaced.
Segmentation is a memory management scheme that supports a logical view of the address space of a process as being made up of several variable-sized segments.

A segmented memory has the following advantages:

it simplifies the handling of data structures that grow or shrink

linking procedures that were compiled separately is greatly simplified

it facilitates sharing of data or procedures between processes

procedure and data segments can be distinguished and separately protected
The programmer is aware of this technique: the address space becomes two-dimensional (segment + offset) instead of the linear, one-dimensional address space that paging provides.

Every segment has a linear address space.

Segmentation can be used to implement virtual memory.

Procedures and data can be distinguished and separately protected.

Segmentation can easily accommodate program structures whose size fluctuates.

It is easy to share procedures between users.
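As an illustration of the two-dimensional (segment + offset) address, here is a minimal C sketch of segment-table translation with a bounds check; the structure, field names, and limits are illustrative assumptions, not a description of any particular machine.

    #include <stdint.h>
    #include <stdbool.h>

    /* Per-process segment table entry: base address and segment length. */
    struct segment { uint32_t base; uint32_t limit; };

    /* Returns true and writes the physical address if the offset is within bounds;
       an out-of-range segment number or offset would cause a trap in a real system. */
    bool translate_segmented(const struct segment table[], uint32_t nsegs,
                             uint32_t seg, uint32_t offset, uint32_t *physical)
    {
        if (seg >= nsegs || offset >= table[seg].limit)
            return false;                    /* invalid segment or offset */
        *physical = table[seg].base + offset;
        return true;
    }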