Computer Science: Address Binding


Questions and Answers

What is the primary goal of the Inverted Page Table Architecture?

  • To prioritize kernel processes over user processes
  • To optimize the TLB for faster hardware lookups
  • To reduce the overhead of page translation (correct)
  • To reduce the size of the translation storage buffer

What is the function of the TLB in the Oracle SPARC Solaris architecture?

  • To store the entire hash table
  • To manage the kernel and user process hash tables
  • To cache recently accessed page translations (correct)
  • To handle page faults and kernel interrupts

How does the CPU respond when a match is not found in the TLB?

  • It creates a new TTE and stores it in the TSB
  • It initiates a kernel interrupt to search the hash table (correct)
  • It generates a page fault and halts the system
  • It discards the virtual address reference and continues execution
What is the purpose of the translation storage buffer (TSB)?

  • To cache recently accessed page translations (correct)

How do the kernel and user process hash tables differ?

  • The kernel hash table is larger and more complex (correct)

What is the result of a successful TLB search?

  • The CPU completes the address translation (correct)

What is the significance of the base address and span in each hash table entry?

  • They indicate the number of pages each entry represents (correct)

How does the kernel respond when a match is not found in the TSB?

  • It creates a new TTE from the hash table and stores it in the TSB (correct)

What is the purpose of hashing in the Oracle SPARC Solaris architecture?

  • To enable efficient mapping of virtual to physical memory addresses (correct)

What is the benefit of having each hash table entry represent a contiguous area of mapped virtual memory?

  • It reduces the number of separate hash-table entries for each page (correct)

    Study Notes

    Address Binding

    • Addresses represented in different ways at different stages of a program's life:
      • Source code addresses are usually symbolic
      • Compiled code addresses are bound to relocatable addresses
      • Linker or loader binds relocatable addresses to absolute addresses
      • Each binding maps one address space to another

    Binding of Instructions and Data to Memory

    • Address binding of instructions and data to memory addresses can happen at three different stages:
      • Compile time
      • Load time
      • Execution time

    Compile-time Address Binding

    • If the memory location is known a priori, absolute code can be generated
    • Must recompile code if starting location changes

    Load-time Address Binding

    • Generate relocatable code if memory location is not known at compile time

    Execution-time Address Binding

    • Binding delayed until run time if the process can be moved during its execution from one memory segment to another
    • Need hardware support for address maps (e.g., base and limit registers)

    Logical vs. Physical Address Space

    • Logical address space is the set of all logical addresses generated by a program
    • Physical address space is the set of all physical addresses corresponding to those logical addresses
    • Logical and physical addresses are the same in compile-time and load-time address-binding schemes
    • Logical (virtual) and physical addresses differ in execution-time address-binding scheme

    Memory-Management Unit (MMU)

    • Hardware device that maps virtual addresses to physical addresses at run time
    • The value in the relocation register is added to every address generated by a user process at the time it is sent to memory
    • Execution-time binding occurs when a reference is made to a location in memory
    • The logical address is bound to a physical address at that moment
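The relocation-register scheme above can be sketched in software (a toy model, not real MMU hardware; the register values are invented for the example):

```python
# Illustrative sketch of dynamic relocation with relocation (base) and
# limit registers; the register values here are made up for the example.

RELOCATION_REGISTER = 14000   # starting physical address of the process
LIMIT_REGISTER = 3000         # size of the process's logical address space

def translate(logical_address):
    """Map a logical address to a physical one; trap on out-of-range access."""
    if not 0 <= logical_address < LIMIT_REGISTER:
        raise MemoryError("addressing error: trap to operating system")
    return logical_address + RELOCATION_REGISTER
```

For instance, logical address 346 maps to physical address 14346, while an access at 4000 exceeds the limit register and traps.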

    Dynamic Loading

    • Routine is not loaded until it is called
    • Better memory-space utilization; unused routine is never loaded
    • All routines kept on disk in relocatable load format
    • No special support from the operating system is required

    Dynamic Linking

    • Static linking – system libraries and program code combined by the loader into the binary program image
    • Dynamic linking – linking postponed until execution time
    • Small piece of code, stub, used to locate the appropriate memory-resident library routine
    • Stub replaces itself with the address of the routine, and executes the routine

    Swapping

    • A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution
    • Total physical memory space of processes can exceed physical memory
    • Backing store – fast disk large enough to accommodate copies of all memory images for all users
    • Roll out, roll in – swapping variant used for priority-based scheduling algorithms; lower-priority process is swapped out so higher-priority process can be loaded and executed

    Context Switch Time including Swapping

    • If next process to be put on CPU is not in memory, need to swap out a process and swap in target process
    • Context switch time can then be very high
    • Can reduce swap time by reducing the amount of memory swapped – by knowing how much memory is really being used

    Page Table Implementation
    • Each page table entry takes memory to track
    • Page sizes are growing over time, e.g., Solaris supports 8 KB and 4 MB page sizes
    • The process's view of memory and the actual physical memory layout are now very different
    • By construction, each process can access only its own memory

    Page Table Structure

    • The page table is kept in main memory
    • Page-table base register (PTBR) points to the page table
    • Page-table length register (PTLR) indicates the size of the page table
    • Every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction
    • This two-memory-access problem can be solved using a special fast-lookup hardware cache called associative memory or translation look-aside buffers (TLBs)

    Translation Look-Aside Buffers (TLBs)

    • TLBs store address-space identifiers (ASIDs) in each TLB entry to uniquely identify each process and provide address-space protection
    • Otherwise, the TLB would need to be flushed at every context switch
    • TLBs are typically small (64 to 1,024 entries)
    • On a TLB miss, the value is loaded into the TLB for faster access next time
    • Replacement policies must be considered, and some entries can be wired down for permanent fast access

    Associative Memory

    • Associative memory allows for parallel search
    • Address translation (p, d) involves searching for p in associative registers; if found, the frame number is retrieved
    • Otherwise, the frame number is retrieved from the page table in memory
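The TLB-plus-page-table lookup described above can be modeled as a small software sketch (an illustration only, not real MMU hardware; the page size, ASID, and table contents are assumptions):

```python
# Minimal model of a TLB keyed by (ASID, page number), falling back to
# the per-process page table on a miss. All values here are invented.

PAGE_SIZE = 4096

tlb = {}                          # (asid, page_number) -> frame_number
page_tables = {1: {0: 5, 1: 9}}   # asid -> {page_number: frame_number}

def translate(asid, logical_address):
    p, d = divmod(logical_address, PAGE_SIZE)   # page number, offset
    frame = tlb.get((asid, p))
    if frame is None:                     # TLB miss
        frame = page_tables[asid][p]      # extra access to the page table
        tlb[(asid, p)] = frame            # load into the TLB for next time
    return frame * PAGE_SIZE + d
```

Because entries are keyed by ASID as well as page number, two processes can cache translations for the same page number without flushing the TLB at a context switch.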

    Paging Hardware with TLB

    • Associative lookup takes ε time units
    • Hit ratio (α) is the percentage of times a page number is found in the associative registers; it is related to the number of associative registers
    • With a memory access time of 1 time unit, effective access time (EAT) = (1 + ε)α + (2 + ε)(1 − α)
    • For example, with a hit ratio of 80%, a memory access time of 100 ns, and a negligible TLB search time, EAT = 0.80 × 100 + 0.20 × 200 = 120 ns
    • With a more realistic hit ratio of 99%, EAT = 0.99 × 100 + 0.01 × 200 = 101 ns
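The example figures can be reproduced with a small helper (assuming, as the example does, a 100 ns memory access time and a negligible TLB search time):

```python
# Effective access time: one memory access on a TLB hit, two (page table
# plus data) on a miss. tlb_ns defaults to 0, i.e. a negligible lookup.

def effective_access_time(hit_ratio, memory_ns=100, tlb_ns=0):
    hit_cost = memory_ns + tlb_ns          # one memory access on a hit
    miss_cost = 2 * memory_ns + tlb_ns     # page table + data on a miss
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost
```

With an 80% hit ratio this gives 120 ns, and with 99% it gives 101 ns, matching the example above.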

    Memory Protection

    • Memory protection is implemented by associating a protection bit with each frame to indicate if read-only or read-write access is allowed
    • A valid-invalid bit is attached to each entry in the page table: "valid" indicates the page is in the process' logical address space, and "invalid" indicates the page is not in the process' logical address space
    • Any violations result in a trap to the kernel
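A minimal sketch of these protection checks on a single page-table entry (the field names are invented for illustration, and a violation is modeled as an exception standing in for a trap to the kernel):

```python
# Toy page-table entry with a valid-invalid bit and a protection bit.
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    frame: int
    valid: bool       # page is in the process's logical address space
    writable: bool    # read-write if True, read-only if False

def access(entry, write=False):
    if not entry.valid:
        raise RuntimeError("trap: invalid page reference")
    if write and not entry.writable:
        raise RuntimeError("trap: write to read-only page")
    return entry.frame
```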

    Shared Pages

    • Shared code allows multiple processes to share the same copy of read-only (reentrant) code
    • Private code and data have each process keeping a separate copy
    • Shared pages can be useful for interprocess communication if sharing of read-write pages is allowed

    Hierarchical Paging

    • Breaking up the logical address space into multiple page tables can reduce memory usage
    • A two-level page table is a simple technique to implement hierarchical paging
    • We then page the page table
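As a concrete illustration, a 32-bit logical address might be split into a 10-bit outer page number, a 10-bit inner page number, and a 12-bit offset (4 KB pages); these field widths are one common choice, not a universal rule:

```python
# Decompose a 32-bit logical address for an assumed two-level layout:
# 10-bit outer page number | 10-bit inner page number | 12-bit offset.

OFFSET_BITS = 12
INNER_BITS = 10

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    inner = (addr >> OFFSET_BITS) & ((1 << INNER_BITS) - 1)
    outer = addr >> (OFFSET_BITS + INNER_BITS)
    return outer, inner, offset
```

The outer page number indexes the outer page table, whose entry points to an inner page table; the inner page number then selects the frame, and the offset selects the byte within it.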

    Hashed Page Tables

    • Hashed page tables are used in address spaces > 32 bits
    • The virtual page number is hashed into a page table, which contains a chain of elements hashing to the same location
    • Each element contains the virtual page number, the mapped page frame, and a pointer to the next element
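A toy version of such a chained hash table (the bucket count and modulo hash are arbitrary choices for this sketch):

```python
# Hashed page table: each slot holds a chain of
# (virtual page number, frame number) pairs hashing to that slot.

NUM_BUCKETS = 16
buckets = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn, frame):
    buckets[vpn % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    for entry_vpn, frame in buckets[vpn % NUM_BUCKETS]:  # walk the chain
        if entry_vpn == vpn:
            return frame
    return None   # no mapping found: page fault
```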

    Inverted Page Tables

    • Inverted page tables track all physical pages, rather than each process having a page table
    • Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page
    • Decreases memory needed to store each page table, but increases time needed to search the table when a page reference occurs
    • A hash table can be used to limit the search to one – or at most a few – page-table entries
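A minimal model of the linear-search variant (the sizes are invented; as noted above, a real implementation hashes the pair to avoid scanning the whole table):

```python
# Inverted page table: one entry per physical frame, each recording
# which (process id, virtual page number) occupies it. The index of
# the matching entry IS the frame number.

NUM_FRAMES = 8
inverted_table = [None] * NUM_FRAMES   # entries: (pid, vpn) or None

def lookup(pid, vpn):
    for frame, entry in enumerate(inverted_table):   # linear search
        if entry == (pid, vpn):
            return frame
    return None   # not resident: page fault

inverted_table[3] = (1, 10)   # process 1's page 10 lives in frame 3
```

Note the trade-off the notes describe: the table size is proportional to physical memory rather than to the number of processes, but each reference requires a search instead of a direct index.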


    Description

    This quiz covers different stages of address binding in a program's life cycle, including source code, compiled code, and absolute addresses.
