Memory

Intro

A computer's non-persistent memory is often called RAM (random access memory). Its primary distinctions from secondary storage are that it is volatile (its contents are lost when power is removed), much faster to access, and directly addressable by the CPU. The operating system must manage RAM, cooperatively with the CPU and the chipset. The unit within the OS that does this is the Memory Manager. Note that sometimes a memory manager is called an MMU (memory management unit), which can be confusing because the same terminology is used for the portion of the processor that supports memory operations. We will try to keep the distinction clear by only using the abbreviation MMU when we refer to hardware, and the term Memory Manager when referring to a component of the OS.

Basic Principles

Recall that the two primary roles of an operating system are (1) to provide useful abstractions and (2) to implement those abstractions efficiently on the hardware. For memory, an address space is the abstraction. It represents the set of addresses that a process can access, and is used to conceptually separate the memory as seen by a process from physical memory. The issue then becomes how to implement address spaces, and how to map them into physical memory. Doing so raises many questions: How are logical addresses translated to physical addresses? How do we prevent an executing process from accessing memory outside its allotted region? Where in physical memory should each address space be placed, and what happens when it grows? These are all questions we'll address.

Modern operating systems support the ability to have more than one process running at a time (multiprogramming), meaning that the address spaces of multiple processes (or parts of them) will be in memory at the same time. This means that at the very least, the following two problems need to be solved:

Protection: one process must be prevented from reading or writing memory that belongs to another process or to the OS.

Relocation: a process cannot know in advance where in physical memory it will be placed, so its addresses must work correctly wherever it lands.

Fragmentation and Compaction

The above illustrates a general problem called fragmentation, which is related to the existence of unused areas of memory. Here, we'll talk about it in the context of memory systems, but it can occur in any storage system. It comes in two varieties: internal fragmentation, where space is wasted inside an allocated region (the region is larger than what it holds), and external fragmentation, where unusable gaps accumulate between allocated regions. In the previous example, external fragmentation occurs. This can be reduced through occasional memory compaction. Compaction involves moving all the allocated areas of memory into one contiguous region, eliminating areas of external fragmentation. An OS will try to avoid or minimize the need for compaction, because of its very high CPU and energy cost.

One variation that can reduce the frequency of compaction is to allocate each process a little more memory than it actually needs. Within the allocated region, the process can gradually grow to fill its memory area, as in the following figure. Note that this strategy also introduces internal fragmentation.

(a) Allocating space for a growing data segment.
(b) Allocating space for a growing stack and a growing data segment.
(image: Tanenbaum)

Review of Process Address Space Organization

The Executable Stack

A program's local variables will generally be allocated on the stack if they are simply declared. When the program is compiled, the compiler adds instructions that create local storage space on a function's stack frame. When the function is called, the space is allocated within the stack frame. When the function completes and returns, the space used by that function is de-allocated right off the top of the stack. This makes sense because the function's local variables go out of scope when the function returns. This type of allocation is convenient for two reasons: it is fast (allocating and freeing is just an adjustment of the stack pointer), and it is automatic (the programmer never has to explicitly release the storage). There are two exceptions: global variables and static variables. Since global and static variables are always in scope, they cannot live in any function's stack frame; they are instead allocated in the process' data segment, which persists for the life of the program. Memory requested explicitly at runtime, using calls like malloc, comes from the heap, discussed next.
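As a concrete illustration, here is a minimal C sketch (the names are just for demonstration) of where each kind of variable lives:

#include <stdlib.h>

int g_count;                     /* global: lives in the data segment for the whole run */

void demo(void)
{
    static int calls = 0;        /* static: also in the data segment, persists across calls */
    int local = 42;              /* local: in this function's stack frame, gone after return */
    char *buf = malloc(64);      /* the pointer is on the stack; the 64 bytes are on the heap */

    calls++;
    (void)local;                 /* silence unused-variable warnings in this toy example */
    free(buf);                   /* heap memory must be released explicitly */
}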

The Heap

The heap is a region of memory or address space that is assigned to a process. The program may allocate and deallocate memory on the heap at any time, but must do so explicitly. Memory that is allocated but not explicitly deallocated may cause leaks, which are unusable regions of memory that can accumulate. Some operating systems or runtime environments use a garbage collector, which finds and reclaims unused memory regions, to mitigate this problem.

Various operating systems will have their own API calls for memory allocation and deallocation. In the C runtime, we use library calls like malloc(), calloc(), realloc(), and free() (C++ adds the new and delete operators). If you'd like to see your Linux system kill a process that has allocated too much memory, open up System Monitor and try the following simple demo:
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *c;
    while (1) {
        c = malloc(1024 * 1024);           /* request one megabyte */
        if (c != NULL)
            memset(c, 1, 1024 * 1024);     /* touch it, so physical pages are really used */
    }
    exit(0);
}
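To try it, compile and run the program (e.g., gcc demo.c -o demo && ./demo, where the file name is just an example) while watching memory usage in System Monitor; on a typical Linux configuration, the kernel's out-of-memory (OOM) killer will eventually terminate the process, and dmesg will show a record of the kill. The memset matters because Linux overcommits memory: pages that are allocated but never touched may not consume any physical RAM.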

Segmentation: Variable Sized Address Spaces



The idea of splitting up a process' address space into multiple pieces that can be separately allocated and managed is called segmentation. While the pieces are still variable sized, they're smaller than a process' entire address space and hence easier to place and manage. This is also one way of addressing the problem that, in real programs, there are often multiple allocated memory regions that change in size over time.

Multiple allocated regions of memory may need to vary in size over time.
(image: Tanenbaum)


Each segment has a base, which locates it in physical memory, and a limit, which defines its size. Because segments can be sized to fit their contents, internal fragmentation is minimized.

Segments can grow or shrink independently.
(image: Tanenbaum)

Base and Limit Registers

Two of the questions from above were: (1) how are addresses translated from logical addresses to physical addresses, and (2) how do we prevent an executing process from accessing memory outside of its allotted region? A simple way to do both in segmented memory address spaces is to employ base and limit registers.
Base and limit registers can be used to give each process a separate address space.
(image: Tanenbaum)
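In software terms, the hardware's job on every reference amounts to one comparison and one addition. A minimal sketch in C (the types and names are illustrative, not a real MMU interface):

#include <stdint.h>

/* Illustrative model of the hardware base+limit check. */
typedef struct {
    uint32_t base;    /* where the process' region begins in physical memory */
    uint32_t limit;   /* size of the region, in bytes */
} region_t;

/* Returns the physical address, or -1 to signal a protection fault. */
int64_t translate(region_t r, uint32_t logical)
{
    if (logical >= r.limit)
        return -1;                     /* reference outside the allotted region: fault */
    return (int64_t)r.base + logical;  /* relocate by adding the base */
}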


Segments are visible to the programmer (or at least the compiler). Executable code that references segmented memory must include the identifier of the segment being referenced, plus the offset. Each segment can also have its own protection values. For example, a data segment would be read/write, but a code segment would be read-only. There may be a kernel segment and a user segment.

Allocation of variable-sized regions leads to external fragmentation. Eventually, if there are enough small, unused external fragments, and there is a need for more memory allocation, compaction is required. Compaction is the process of copying entire blocks of allocated memory into more tightly spaced contiguous blocks.

External fragmentation develops over time, then is corrected through memory compaction by the OS.
(image: Tanenbaum)

Descriptor Tables and Address Calculation

A generalization of the base register+limit approach is to have a table called a segment table, sometimes called a descriptor table because each entry is a descriptor. The table is consulted by the hardware and maintained by the OS. Such tables can exist both globally and for individual processes. The Intel architecture uses the term Global Descriptor Table for the OS-wide segments, and the term Local Descriptor Table for segments particular to a process.

The following figure illustrates how a physical address is calculated from a logical address in a system that uses segmentation; note that this is done in hardware:
The offset bits are carried forward when converting a logical address to a physical address using segmented memory. The base address of the referenced segment must first be pulled from the segment table; then base + offset gives the desired physical address. The hardware and OS must also check that the offset does not exceed the segment's limit; a violation raises a fault (on Unix, the familiar segfault).
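A sketch of that calculation in C (the 4-bit segment selector and 28-bit offset are assumptions chosen for illustration):

#include <stdint.h>

#define SEG_BITS    4                      /* assumed: top 4 bits select the segment */
#define OFFSET_BITS 28                     /* assumed: remaining 28 bits are the offset */

typedef struct { uint32_t base, limit; } segdesc_t;

segdesc_t segtable[1 << SEG_BITS];         /* the segment (descriptor) table */

/* Returns the physical address, or -1 on a limit violation (a segfault). */
int64_t seg_translate(uint32_t logical)
{
    uint32_t seg    = logical >> OFFSET_BITS;
    uint32_t offset = logical & ((1u << OFFSET_BITS) - 1);

    if (offset >= segtable[seg].limit)     /* offset beyond the segment's limit */
        return -1;
    return (int64_t)segtable[seg].base + offset;   /* base + offset */
}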

Virtual Memory: Fixed Size Memory Address Spaces

Intro

Having a fixed-size unit of memory allocation simplifies many issues related to memory management. This is the key difference between a system that uses segmentation and a system that uses virtual memory. In the latter, fixed-size units called pages are loaded into regions of physical memory of the same size, called page frames. Pages are usually not that large; 4K is a commonly used size. By requiring all pages to be the same size, and allowing a page to be placed in any free frame (contiguity is not required), all external fragmentation is eliminated.

A second key issue is that often in real computing systems, one process, or many processes together, need to use more memory than is physically available. Virtual memory also solves this problem. The basic idea behind virtual memory is this: each process is given its own large virtual address space, divided into pages; only the pages currently in use need to be resident in physical frames, while the rest are kept on disk and brought in on demand.

Virtual memory thus requires the "translation" of virtual addresses to physical addresses whenever a memory reference is made.

Virtual Address Translation

In a virtual memory system, the page table maps a virtual page number to a physical frame number. To calculate a physical address, the virtual page number is used as an index into the page table to find the frame where that virtual page currently resides. Once the frame number is obtained, the address of the start of the frame is combined (concatenated) with the offset bits to obtain the right physical address.
Translation of a virtual address to a physical address, using a Page Table.
Note the offset bits do not change; they are carried forward.

Page Tables

This mapping between pages and frames occurs in the Page Table. For each virtual page in the Page Table, it indicates what physical frame that page currently resides in, or indicates that the page is not currently loaded into a physical memory frame. A page's presence or absence in physical memory is tracked using a present/absent bit. If this bit is 0, the page is not loaded in a frame. If the bit is 1, the page is currently loaded into the indicated frame number.
An example page table. The page size is 4K. The number in the page table entry indicates the memory frame where the page is loaded, or 'X' if the page is not currently in physical memory. The page table will change over time, as virtual memory pages are loaded into frames or swapped out to secondary storage (disk). Note the virtual memory size is 64K, while physical memory is only 32K. (image: Tanenbaum)
If a memory reference occurs on a page that's not currently in a main memory frame (i.e., its present/absent bit is 0), a page fault occurs. This signals the OS that the calling process will not be able to continue until the needed page is swapped in from disk.

Virtual addresses are mapped to physical addresses as follows: (1) the virtual address is split into a virtual page number (the high-order bits) and an offset (the low-order bits); (2) the virtual page number is used as an index into the page table, and the present/absent bit is checked (if absent, a page fault occurs); (3) the frame number from the page table entry is concatenated with the unchanged offset bits to form the physical address.

The process is illustrated in the following figure:
Translation of a virtual address (at bottom) to a physical address (at top), using a Page Table.
(image: Tanenbaum)
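Here is a minimal sketch of that translation in C, assuming 4K pages (12 offset bits) and a toy 16-page virtual space; the types and names are illustrative, not any real kernel's:

#include <stdint.h>

#define OFFSET_BITS 12                    /* 4K pages -> 12-bit offset */
#define NPAGES      16                    /* assumed: a tiny 16-page virtual space */

typedef struct {
    unsigned present : 1;                 /* 1 = page is loaded in some frame */
    unsigned frame   : 20;                /* frame number, valid only if present */
} pte_t;

pte_t page_table[NPAGES];

/* Returns the physical address, or -1 to signal a page fault. */
int64_t va_to_pa(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> OFFSET_BITS;              /* virtual page number */
    uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);

    if (vpn >= NPAGES || !page_table[vpn].present)
        return -1;                                       /* page fault: OS must load the page */

    return ((int64_t)page_table[vpn].frame << OFFSET_BITS) | offset;
}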
A Page Table Entry (PTE) usually contains other important information as well: the present/absent bit; protection bits (what kinds of access are permitted, e.g. read/write/execute); a modified ("dirty") bit, set when the page has been written to; a referenced bit, set when the page is accessed (used by page replacement algorithms); and a caching-disabled bit, used for pages that map device registers.

A typical page table entry. (image: Tanenbaum)
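As a rough C rendering of such an entry (the field widths are assumptions; real layouts are architecture-specific):

#include <stdint.h>

/* Illustrative layout of a page table entry. */
typedef struct {
    unsigned frame      : 20;  /* page frame number, valid if present */
    unsigned present    : 1;   /* 1 = page is loaded in a frame */
    unsigned protection : 3;   /* e.g., read/write/execute permissions */
    unsigned modified   : 1;   /* "dirty" bit: set on write; page must be written back */
    unsigned referenced : 1;   /* set on any access; used by page replacement */
    unsigned nocache    : 1;   /* disable caching, e.g., for memory-mapped I/O */
} pte_t;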


See Page Table Entries featured in this YouTube clip from the movie The Social Network.

Virtual Memory and Page Table Entry Calculations

The following relationships exist between the virtual address, the physical address, the page size, and the page table: the number of offset bits is log2(page size); the number of virtual page number bits is (virtual address bits - offset bits); the number of entries in the page table is 2^(virtual page number bits); and the number of frame bits is (physical address bits - offset bits).

The number of bits used for each part of an address is important. For example, with 32-bit virtual addresses and a 4K page size, the offset occupies 12 bits (2^12 = 4096), leaving 20 bits of virtual page number, so the page table needs 2^20 (about one million) entries.
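To make the arithmetic concrete, here is a small worked example in C (the 32-bit/4K sizes are assumptions chosen for illustration):

#include <stdio.h>

int main(void)
{
    /* Assumed example: 32-bit virtual addresses with 4K pages. */
    int vaddr_bits = 32;
    int page_size  = 4096;

    int offset_bits = 0;                         /* compute log2(page_size) */
    while ((1 << offset_bits) < page_size)
        offset_bits++;

    int vpn_bits = vaddr_bits - offset_bits;     /* bits left for the page number */

    printf("offset bits:        %d\n", offset_bits);       /* 12 */
    printf("page number bits:   %d\n", vpn_bits);          /* 20 */
    printf("page table entries: %ld\n", 1L << vpn_bits);   /* 2^20 = 1,048,576 */
    return 0;
}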

Advantages

Recall that Relocation is the idea that a program's allocated memory may be in a different physical location each time the program is loaded, or each time a particular page is swapped in from disk. Segmentation allowed for this by varying the "base" register; virtual memory naturally provides this through the page table, which allows pages to go anywhere in memory.

Virtual memory also has several advantages over segmentation: fixed-size pages eliminate external fragmentation entirely; allocation is simple, since any free frame will do; programs larger than physical memory can run, because only the pages currently in use need to be resident; and pages are invisible to the programmer, requiring no special addressing in the executable code.

For these reasons, virtual memory has become more dominant over time.

Challenges with Virtual Memory

Virtual memory, as described so far, has two primary limitations: (1) every memory reference now requires an extra access to the page table in RAM, doubling the cost of each reference; and (2) the page table itself can be huge (with 32-bit addresses and 4K pages it needs about a million entries per process, and with 64-bit addresses a flat table is infeasible). In this section, we discuss several optimizations that address these two challenges.

TLB: a Cache for Page Table Lookups

The solution to the two-memory-lookups problem lies in hardware. The construct is called a Translation Lookaside Buffer (TLB). A TLB is basically just a cache for page table lookups. It is built from fully associative memory; recall from computer architecture that this allows the MMU, in essence, to examine all rows of the cache (TLB) in parallel. Entries in the TLB look like PTEs, only there are far fewer of them. Thanks to spatial and temporal locality, though, it's possible to have a high "TLB hit rate." If the PTE for the referenced page is in the TLB, it's a TLB hit. Otherwise, it's a TLB miss, and the MMU goes to the full Page Table, in RAM, as previously described, to find the right frame. Although the TLB is a hardware construct, management of the TLB is often done cooperatively between the OS and the CPU. Exact details are specific to the architecture and the OS.

When we include the TLB, the overall process of translating a virtual address to a physical one now looks like this:
Address calculation, with the TLB included.
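Real TLB hardware checks all entries simultaneously; software can only approximate that with a loop. A toy model in C (the sizes and names are invented for illustration):

#include <stdint.h>

#define TLB_SIZE 16                       /* assumed: a tiny 16-entry TLB */

typedef struct {
    int      valid;
    uint32_t vpn;                         /* virtual page number: the tag being matched */
    uint32_t frame;                       /* the cached translation */
} tlb_entry_t;

tlb_entry_t tlb[TLB_SIZE];

/* Returns the frame number on a TLB hit, or -1 on a miss
 * (on a miss, the MMU walks the full page table in RAM). */
int64_t tlb_lookup(uint32_t vpn)
{
    for (int i = 0; i < TLB_SIZE; i++)    /* hardware does this for all entries at once */
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return tlb[i].frame;          /* TLB hit */
    return -1;                            /* TLB miss */
}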

Multi-Level Page Tables



One solution to the "huge page table" problem is to split the table up into multiple levels, like a tree. In this scheme, the top bits of the virtual address are an index into the top-level page table, the next group of bits is an index into a second-level page table, and so on. The concept can be extended to any number of levels, but typically two to four suffice. The actual Page Table Entries (PTEs) are found at the last level (the leaves of the tree). The savings come from the fact that second-level tables for unused regions of the address space never need to be created at all. A Multi-level Page Table is illustrated in the following figure:
(a) Virtual address format. (b) Multi-level Page Table.
(image: Tanenbaum)
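Here is a sketch of a two-level walk in C, assuming the classic 10+10+12 bit split of a 32-bit virtual address (all names and widths are illustrative):

#include <stdint.h>
#include <stddef.h>

/* Assumed split of a 32-bit address: 10-bit top index, 10-bit second index, 12-bit offset. */
#define PT1(v)    ((v) >> 22)
#define PT2(v)    (((v) >> 12) & 0x3FFu)
#define OFFSET(v) ((v) & 0xFFFu)

typedef struct { unsigned present : 1; unsigned frame : 20; } pte_t;

pte_t *toplevel[1024];                    /* top level holds pointers to second-level tables */

/* Returns the physical address, or -1 for a fault at either level. */
int64_t walk(uint32_t vaddr)
{
    pte_t *second = toplevel[PT1(vaddr)];
    if (second == NULL)
        return -1;                        /* this 4MB region has no second-level table at all */

    pte_t pte = second[PT2(vaddr)];
    if (!pte.present)
        return -1;                        /* page fault */

    return ((int64_t)pte.frame << 12) | OFFSET(vaddr);
}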


The drawback of Multi-level Page Tables is that the more layers they have, the more memory lookups are required for a single memory reference. The silver lining is that a TLB hit alleviates the need for all the lookups, so a respectably sized TLB reduces this penalty.

Inverted Page Tables

The other common solution to the "huge page table" problem is the Inverted Page Table. Because virtual memory is usually much larger than physical memory, most of a traditional Page Table's entries are empty (not "present"). Instead of holding a row for each virtual memory page, the Inverted Page Table has a row for each physical memory frame.

The drawback to the Inverted Page Table is that looking up a frame by its page number is no longer a direct lookup. Instead of looking directly into entry 92 of the Page Table to find what frame holds page 92, we need to search the Inverted Page Table, in effect asking, "does any frame hold page 92?". This limitation is mitigated in two ways. The first is the TLB, of course. The second, should a full lookup be needed, is a mapping function such as a hash: the virtual page number is hashed to select a bucket. With one entry in the Inverted Page Table for each physical frame, and a hash table of about the same size, there will be roughly one entry per bucket (assuming few collisions). The only wrinkle comes when there are hash collisions, which are handled, as usual, via chaining.

The Inverted Page Table is illustrated in the following diagram:

A traditional Page Table (left) requires 2^52 entries, in this example.
The equivalent Inverted Page Table (right) requires only 2^18 entries.
The relative size ratio is 2^34: the inverted table is about 17 billion times smaller.
Yet, a lookup still takes O(1) time.
(image: Tanenbaum)
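A sketch of the hashed lookup in C (the sizes and hash are illustrative assumptions; real implementations also tag each entry with a process identifier):

#include <stdint.h>

#define NFRAMES 256                       /* assumed: one table entry per physical frame */

typedef struct {
    int      used;                        /* does this frame currently hold a page? */
    uint32_t vpn;                         /* which virtual page the frame holds */
    int      next;                        /* next index in this hash chain, or -1 */
} ipt_entry_t;

ipt_entry_t ipt[NFRAMES];
int hash_anchor[NFRAMES];                 /* bucket -> first table index; assume initialized to -1 */

/* Returns the frame number holding the page, or -1 if it isn't resident. */
int ipt_lookup(uint32_t vpn)
{
    for (int i = hash_anchor[vpn % NFRAMES]; i != -1; i = ipt[i].next)
        if (ipt[i].used && ipt[i].vpn == vpn)
            return i;                     /* the entry's index is the frame number */
    return -1;                            /* not in memory: page fault */
}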

The Buddy System: Powers of Two

Besides segmentation and virtual memory, there is another memory allocation scheme that is sometimes used, called the Buddy System, or Buddy Memory Allocation. Memory can be allocated in different size units according to powers of two, which helps reduce inefficiency by allocating memory regions whose size more closely matches the requirement. In a Buddy System implementation: block sizes are powers of two between a smallest size 2^L and a largest size 2^U; a request is rounded up to the next power of two; if no free block of that size exists, a larger block is split in half repeatedly, the two halves of each split being "buddies"; and when a block is freed, it is recombined (coalesced) with its buddy if the buddy is also entirely free. The buddy system has been used in kernel memory allocation in both Unix and Linux. Note, however, that it doesn't translate well to virtual memory, so its use is primarily relegated to physical RAM.
Memory regions are structured as a binary tree. In this example, L=16 and U=20. (Image: Stallings)
Smaller regions may be coalesced into larger ones as adjacent ones are freed. (Image: Stallings)
Here is a reference to the Buddy System being implemented in software libraries that support an AMD GPU device.
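Two small calculations drive a buddy allocator: rounding a request up to the next power of two, and locating a block's buddy, which differs from the block only in the bit corresponding to its size. A sketch in C under those assumptions (offsets measured from the start of the managed region):

#include <stdint.h>

/* Round a request up to the next power of two: the size class that will be allocated. */
uint32_t round_up_pow2(uint32_t n)
{
    uint32_t size = 1;
    while (size < n)
        size <<= 1;
    return size;
}

/* A block's buddy differs from it only in the bit equal to the block's size,
 * so XOR locates it directly (blocks are always aligned to their own size). */
uint32_t buddy_of(uint32_t offset, uint32_t size)
{
    return offset ^ size;
}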

Summary / Key Points