Questions and Answers
What does memory management in an operating system determine?
What is the purpose of base and limit registers in memory management?
To define the range of legal addresses that a process may access and ensure that the process can only access those legal addresses.
Logical addresses and physical addresses are always the same in the execution-time address-binding scheme.
False
In the simplest dynamic relocation scheme, the relocation register adds a value to every ______ generated by a user process.
Where can the operating system be placed in contiguous memory allocation?
What does the relocation-register scheme help the operating system do?
What are the memory allocation strategies discussed in the text?
What is external fragmentation in memory management?
How is the problem of internal fragmentation avoided in memory allocation?
What is the purpose of the high-order m-n bits in a logical address in the paging model of memory management?
Explain what the n low-order bits in a logical address designate in the paging model.
What is the purpose of a translation look-aside buffer (TLB) in memory management?
Paging eliminates external fragmentation.
In a memory management scheme, if a TLB miss occurs, a memory reference to the ____________ must be made.
What happens when an attempt is made to write to a read-only page in memory?
What additional bit is generally attached to each entry in the page table?
Inverted Page Tables store one entry for each real page of memory.
Shared memory is usually implemented using multiple virtual addresses mapped to one ________ address.
What is dynamic loading?
What is the advantage of dynamic loading?
What is a stub in dynamic linking?
Dynamic linking generally requires help from the operating system. True or False?
What is roll out, roll in in the context of swapping?
What is demand paging?
What are the components of hardware support for demand paging?
Effective access time is only affected by memory access time and is not influenced by page fault time.
In demand paging, if a page fault occurs during instruction fetch, the instruction is fetched ____.
What is segmentation in memory management?
What does a segment in memory management represent?
Segmentation architecture uses a segment table to map two-dimensional physical addresses.
The segment table entry contains a base that specifies the starting physical address where the segment resides and a limit that specifies the ________ of the segment.
Match the following memory management components to their descriptions:
What does LRU stand for in the context of page replacement algorithms?
What is the key distinction between the FIFO and OPT algorithms in page replacement?
LRU replacement algorithm looks forward in time for page replacements.
LRU replacement algorithm associates with each page the time of that page’s last ___.
What technique is typically used by operating systems to allocate pages?
Vfork() suspends the child process while the parent process continues execution.
What is an extremely efficient method of process creation where no copying of pages takes place?
Page replacement involves finding a ____ frame to use.
Match the page replacement algorithm with its characteristics:
Study Notes
Operating Systems
Memory Management
- Memory management determines what is in memory and when, to optimize CPU utilization and computer response to users.
- It involves:
- Keeping track of which parts of memory are currently used and by whom.
- Deciding which processes (or parts of processes) and which data to move into and out of memory.
- Allocating and deallocating memory space as needed.
Memory Management Strategies
- Protection of memory space is accomplished by having the CPU hardware compare every address generated in user mode with the base and limit registers.
- Any attempt to access operating system memory or other users' memory results in a trap to the operating system.
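A minimal sketch of this hardware check (the register values, function names, and trap routine below are illustrative, not a real kernel interface): an access is legal only if it falls between base and base + limit.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch of the base/limit check the CPU hardware performs
 * on every address generated in user mode. Register values are placeholders. */

static uint64_t base_register  = 300040;  /* smallest legal physical address */
static uint64_t limit_register = 120900;  /* size of the legal range         */

static void trap_to_os(const char *reason)
{
    /* In real hardware this raises a trap; here we just report and exit. */
    fprintf(stderr, "trap: %s\n", reason);
    exit(EXIT_FAILURE);
}

static void check_user_access(uint64_t address)
{
    /* Legal iff base <= address < base + limit. */
    if (address < base_register || address >= base_register + limit_register)
        trap_to_os("addressing error: access outside base/limit range");
}

int main(void)
{
    check_user_access(300040 + 1000);   /* legal: silently allowed  */
    check_user_access(500000);          /* illegal: traps to the OS */
    return 0;
}
```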
Address Binding
- Addresses can be bound to memory addresses at:
- Compile time: Absolute code is generated, and the process is bound to a specific memory location.
- Load time: Relocatable code is generated, and binding is delayed until load time.
- Execution time: Binding is delayed until run time; special hardware must be available for this scheme to work.
Logical and Physical Address Spaces
- Logical addresses are generated by the CPU, and physical addresses are seen by the memory unit.
- In execution-time address binding, logical and physical addresses differ.
- The memory management unit (MMU) maps virtual addresses to physical addresses.
Dynamic Relocation and Dynamic Loading
- Dynamic relocation uses a relocation register to add a base value to every address generated by a user process.
- Dynamic loading loads a routine only when it is called, to optimize memory usage.
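A minimal sketch of the relocation-register scheme, assuming the MMU first compares the logical address against a limit register and then adds the relocation register; the register values follow the common textbook example and the function names are made up.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch of dynamic relocation with a relocation (base) register and a
 * limit register, as done by a simple MMU. Values are illustrative. */

static uint64_t relocation_register = 14000;  /* added to every logical address */
static uint64_t limit_register      = 3000;   /* size of the logical space      */

static uint64_t mmu_translate(uint64_t logical)
{
    if (logical >= limit_register) {          /* outside the process's space */
        fprintf(stderr, "trap: logical address %llu exceeds limit\n",
                (unsigned long long)logical);
        exit(EXIT_FAILURE);
    }
    return logical + relocation_register;     /* dynamic relocation */
}

int main(void)
{
    /* Logical address 346 maps to physical address 14346. */
    printf("logical 346 -> physical %llu\n",
           (unsigned long long)mmu_translate(346));
    return 0;
}
```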
Dynamic Linking and Shared Libraries
- Dynamic linking postpones linking until execution time, allowing multiple programs to share the same library code.
- A stub is included in the program image, which checks if the needed routine is in memory, and loads it if necessary.
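The stub mechanism itself is handled transparently by the loader, but POSIX systems expose the same run-time loading idea explicitly through dlopen()/dlsym(). In the sketch below, the library name libexample.so and the compute symbol are hypothetical; the program would typically be linked with -ldl.

```c
#include <dlfcn.h>   /* POSIX dynamic-loading interface */
#include <stdio.h>

/* Explicit run-time loading of a shared library via dlopen()/dlsym().
 * This is not the textbook's transparent stub, but it shows the same idea:
 * the routine is brought into memory only when it is actually needed.
 * "libexample.so" and "compute" are hypothetical names. */

int main(void)
{
    void *handle = dlopen("libexample.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine's address, much as a resolved stub would. */
    int (*compute)(int) = (int (*)(int))dlsym(handle, "compute");
    if (!compute) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("compute(21) = %d\n", compute(21));
    dlclose(handle);
    return 0;
}
```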
Swapping
- Swapping temporarily swaps a process out of memory to a backing store and brings it back into memory for continued execution.
- Swapping is used in multiprogramming environments to optimize memory usage and CPU utilization.
Swapping Implementation
- Swapping requires a backing store, typically a fast disk.
- The system maintains a ready queue of processes, and the dispatcher checks if the next process is in memory, swapping out a process if necessary.
- Context-switch time in a swapping system is fairly high.
- Context switching time is the time it takes to switch between processes, which includes swapping the process out of memory and swapping a new process in.
- The swap time is affected by the transfer time, which is directly proportional to the amount of memory swapped.
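- As a rough illustration with assumed numbers (not from the text): swapping a 100 MB process to a backing store that transfers 50 MB per second takes about 2 seconds in each direction, so a complete swap out and swap in costs on the order of 4 seconds plus latency.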
Swapping
- Swapping is a method of memory management that involves temporarily transferring a process from main memory to a secondary storage (backing store) to free up space.
- Swapping is a time-consuming process and can be a major bottleneck in system performance.
- Swapping is necessary when the main memory is full and a new process needs to be loaded.
Factors Affecting Swapping
- Pending I/O operations can prevent a process from being swapped out.
- I/O operations may be asynchronously accessing the user memory for I/O buffers, which can cause problems if the process is swapped out.
- Solutions to this problem include never swapping a process with pending I/O or executing I/O operations only into operating-system buffers.
Modified Swapping
- Modified versions of swapping are used in many systems, including some versions of UNIX.
- In these systems, swapping is normally disabled but will start if many processes are running and are using a threshold amount of memory.
- Swapping is halted when the load on the system is reduced.
Contiguous Memory Allocation
- In contiguous memory allocation, each process is contained in a single contiguous section of memory.
- The main memory is divided into two partitions: one for the resident operating system and one for the user processes.
- The operating system is usually placed in low memory, and the interrupt vector is also located in low memory.
Memory Mapping and Protection
- The relocation-register scheme helps the operating system by providing an effective way to allow the operating system's size to change dynamically.
- The operating system's size may change dynamically due to the addition or removal of code and buffer space for device drivers.
Memory Allocation Strategies
- Simplest method: Divide memory into several fixed-sized partitions, each of which may contain exactly one process.
- Variable partition scheme: The operating system keeps a table indicating which parts of memory are available and which are occupied.
- Strategies for selecting a free hole: first fit, best fit, and worst fit.
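A minimal sketch of first-fit and best-fit hole selection over a table of free holes; the hole list, sizes, and function names are illustrative (worst fit would simply pick the largest hole instead).

```c
#include <stddef.h>
#include <stdio.h>

/* Sketch of first-fit and best-fit selection over a table of free holes,
 * as kept by a variable-partition allocator. */

struct hole { size_t start; size_t size; };

/* Return the index of the chosen hole, or -1 if none is large enough. */
static int first_fit(const struct hole *holes, int n, size_t request)
{
    for (int i = 0; i < n; i++)
        if (holes[i].size >= request)
            return i;                 /* first hole that is big enough */
    return -1;
}

static int best_fit(const struct hole *holes, int n, size_t request)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i].size >= request &&
            (best == -1 || holes[i].size < holes[best].size))
            best = i;                 /* smallest hole that still fits */
    return best;
}

int main(void)
{
    struct hole holes[] = { {100, 50}, {200, 300}, {600, 120} };
    int n = 3;

    printf("first fit for a 100 KB request: hole %d\n", first_fit(holes, n, 100));
    printf("best fit  for a 100 KB request: hole %d\n", best_fit(holes, n, 100));
    return 0;
}
```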
Fragmentation
- External fragmentation: The problem of having enough total memory space to satisfy a request but the available spaces are not contiguous.
- Internal fragmentation: The problem of having a block of memory allocated to a process that is larger than the requested memory.
- Compaction: A solution to external fragmentation that involves shuffling the memory contents to place all free memory together in one large block.
Paging
- Paging is a method of memory management that allows the physical address space of a process to be noncontiguous.
- Physical memory is divided into fixed-sized blocks called frames, and logical memory is divided into blocks of the same size called pages.
- The operating system keeps track of all free frames and sets up a page table to translate logical addresses to physical addresses.
Address Translation Scheme
- The address generated by the CPU is divided into a page number and a page offset.
- The page number is used as an index into a page table, which contains the base address of each page in physical memory.
- The page offset is combined with the base address to define the physical memory address.
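A minimal sketch of this translation, assuming 4 KB pages (so n = 12 offset bits) and a tiny illustrative page table.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of paging address translation: split the logical address into a
 * page number (high-order bits) and a page offset (low-order n bits), then
 * use the page table to find the frame. Page size and table are illustrative. */

#define PAGE_BITS   12                      /* n: 4 KB pages */
#define PAGE_SIZE   (1u << PAGE_BITS)
#define OFFSET_MASK (PAGE_SIZE - 1)

static const uint32_t page_table[] = { 5, 6, 1, 2 };   /* page -> frame */

static uint32_t translate(uint32_t logical)
{
    uint32_t page   = logical >> PAGE_BITS;      /* high-order m-n bits   */
    uint32_t offset = logical & OFFSET_MASK;     /* low-order n bits      */
    uint32_t frame  = page_table[page];          /* index into page table */
    return (frame << PAGE_BITS) | offset;        /* frame base + offset   */
}

int main(void)
{
    uint32_t logical = (2u << PAGE_BITS) | 0x07; /* page 2, offset 7 */
    printf("logical 0x%x -> physical 0x%x\n",
           (unsigned)logical, (unsigned)translate(logical));
    return 0;
}
```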
Internal Fragmentation
- Internal fragmentation occurs when a process is allocated a frame that is larger than the requested memory.
- The average internal fragmentation is one-half page per process.
Operating System Memory Management
- The operating system must be aware of user processes operating in user space and map logical addresses to produce physical addresses.
Memory Management - Paging
- Paging increases context-switch time because the operating system maintains a copy of the page table for each process.
- The page table is used to translate logical addresses to physical addresses.
- The hardware implementation of the page table can be done in several ways, such as using dedicated registers or a page table base register (PTBR).
Page Table Implementation
- The page table can be stored in fast registers or in main memory, with a PTBR pointing to the page table.
- The page table can be very large, and the use of fast registers is not feasible for large page tables.
Translation Look-aside Buffer (TLB)
- The TLB is a small, fast, associative memory cache that stores recently used page-table entries.
- Each TLB entry consists of a key (or tag) and a value.
- When a logical address is generated, its page number is presented to the TLB, and if the page number is found, its frame number is immediately available.
- The TLB is used to speed up the page-table lookup process.
TLB Usage
- If the page number is found in the TLB, the frame number is used to access memory.
- If the page number is not in the TLB (a TLB miss), a memory reference to the page table must be made.
- When the frame number is obtained, the page number and frame number are added to the TLB.
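A minimal sketch of this lookup path, using a tiny array-based TLB with a simple FIFO replacement slot; the structure and sizes are illustrative, not how real associative hardware is built.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Sketch of a TLB consulted before the in-memory page table. */

#define TLB_ENTRIES 4

struct tlb_entry { bool valid; uint32_t page; uint32_t frame; };

static struct tlb_entry tlb[TLB_ENTRIES];
static const uint32_t page_table[] = { 5, 6, 1, 2 };   /* page -> frame   */
static unsigned next_victim;                           /* simple FIFO slot */

static uint32_t lookup_frame(uint32_t page)
{
    /* 1. Present the page number to the TLB (associative search). */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].page == page) {
            printf("TLB hit for page %u\n", page);
            return tlb[i].frame;
        }

    /* 2. TLB miss: make a memory reference to the page table. */
    printf("TLB miss for page %u, reading the page table\n", page);
    uint32_t frame = page_table[page];

    /* 3. Add the (page, frame) pair to the TLB for next time. */
    tlb[next_victim] = (struct tlb_entry){ true, page, frame };
    next_victim = (next_victim + 1) % TLB_ENTRIES;
    return frame;
}

int main(void)
{
    lookup_frame(2);   /* miss: loads the entry */
    lookup_frame(2);   /* hit                   */
    return 0;
}
```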
Memory Protection
- Memory protection in a paged environment is accomplished by protection bits associated with each frame.
- The protection bits are kept in the page table and are checked to verify that no writes are being made to a read-only page.
- An attempt to write to a read-only page causes a hardware trap to the operating system.
Valid-Invalid Bit
- A valid-invalid bit is attached to each entry in the page table to indicate whether the associated page is in the process's logical address space.
- Illegal addresses are trapped using the valid-invalid bit.
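A minimal sketch of these per-entry checks, assuming a page-table entry that carries a valid-invalid bit and a read-only bit alongside the frame number; the layout and trap function are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch of per-entry protection checks: a valid-invalid bit and a
 * read-only bit kept alongside the frame number. */

struct pte {
    uint32_t frame;
    bool     valid;      /* page is in the process's logical address space */
    bool     read_only;  /* writes to this page are not allowed            */
};

static void trap(const char *reason)
{
    fprintf(stderr, "trap to OS: %s\n", reason);
    exit(EXIT_FAILURE);
}

static uint32_t access_page(const struct pte *pt, uint32_t page, bool is_write)
{
    if (!pt[page].valid)
        trap("invalid page reference");        /* illegal address             */
    if (is_write && pt[page].read_only)
        trap("write to a read-only page");     /* memory-protection violation */
    return pt[page].frame;
}

int main(void)
{
    struct pte pt[] = {
        { .frame = 3, .valid = true,  .read_only = false },
        { .frame = 7, .valid = true,  .read_only = true  },
        { .frame = 0, .valid = false, .read_only = false },
    };

    printf("read page 1 -> frame %u\n", access_page(pt, 1, false));
    access_page(pt, 1, true);   /* traps: write to a read-only page */
    return 0;
}
```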
Shared Pages
- An advantage of paging is the possibility of sharing common code.
- Shared code is non-self-modifying code that can be executed by multiple processes at the same time.
- The operating system should enforce the read-only nature of shared code.
Hierarchical Paging
- Hierarchical paging is a solution to the problem of large page tables.
- The page table is divided into smaller pieces, using a two-level paging algorithm.
- Because the page table itself is paged, the page number is further divided into an outer page number (an index into the outer page table) and an inner page offset (a displacement within the page of the inner page table).
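A minimal sketch of the address split for the common 32-bit, two-level example (10-bit outer page number, 10-bit inner page number, 12-bit offset); the sample address is arbitrary.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of splitting a 32-bit logical address for two-level paging. */

#define OFFSET_BITS 12
#define INNER_BITS  10

int main(void)
{
    uint32_t logical = 0x00ABCDEF;

    uint32_t offset = logical & ((1u << OFFSET_BITS) - 1);
    uint32_t inner  = (logical >> OFFSET_BITS) & ((1u << INNER_BITS) - 1);
    uint32_t outer  =  logical >> (OFFSET_BITS + INNER_BITS);

    /* outer indexes the outer page table, which points to a page of the
     * inner page table; inner indexes that page; offset is unchanged.  */
    printf("outer p1 = %u, inner p2 = %u, offset d = %u\n", outer, inner, offset);
    return 0;
}
```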
Hashed Page Tables
- Hashed page tables are used to handle address spaces larger than 32 bits.
- The virtual page number is hashed into the hash table.
- Each entry in the hash table contains a linked list of elements that hash to the same location.
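A minimal sketch of a hashed page table with chained buckets; the bucket count, hash function, and node layout are illustrative.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch of a hashed page table: the virtual page number is hashed into a
 * bucket, and each bucket holds a chained list of (page, frame) elements
 * that hash to the same slot. */

#define BUCKETS 16

struct hpt_node {
    uint64_t page;            /* virtual page number       */
    uint64_t frame;           /* matching physical frame   */
    struct hpt_node *next;    /* next element in the chain */
};

static struct hpt_node *table[BUCKETS];

static unsigned hash_page(uint64_t page) { return page % BUCKETS; }

static void insert(uint64_t page, uint64_t frame)
{
    struct hpt_node *n = malloc(sizeof *n);
    n->page = page;
    n->frame = frame;
    n->next = table[hash_page(page)];
    table[hash_page(page)] = n;
}

/* Walk the chain in the hashed bucket looking for the virtual page. */
static long lookup(uint64_t page)
{
    for (struct hpt_node *n = table[hash_page(page)]; n; n = n->next)
        if (n->page == page)
            return (long)n->frame;
    return -1;                /* not mapped */
}

int main(void)
{
    insert(0x12345, 42);
    insert(0x12345 + BUCKETS, 7);     /* collides into the same bucket */
    printf("frame for page 0x12345: %ld\n", lookup(0x12345));
    return 0;
}
```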
Inverted Page Tables
- Inverted page tables have one entry for each real page (or frame) of memory.
- Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns the page.
- The inverted page table requires an address-space identifier to be stored in each entry.
Inverted Page Table Operation
- When a memory reference occurs, the virtual address is presented to the memory subsystem.
- The inverted page table is searched for a match, and if found, the physical address is generated.
- If no match is found, an illegal address access has been attempted.
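A minimal sketch of this search, assuming each entry records the owning process id and virtual page number, so that the index of the matching entry is the frame number; the table contents are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of an inverted page table: one entry per physical frame, holding
 * the (process id, virtual page) stored there. A lookup searches the table
 * for a match; the matching index *is* the frame number. */

#define FRAMES 8

struct ipt_entry { int pid; uint32_t page; };

static struct ipt_entry ipt[FRAMES] = {
    [0] = { 1, 10 },
    [3] = { 2,  4 },
    [5] = { 1,  7 },
};

static long lookup_frame(int pid, uint32_t page)
{
    for (long i = 0; i < FRAMES; i++)
        if (ipt[i].pid == pid && ipt[i].page == page)
            return i;            /* index into the table = frame number */
    return -1;                   /* no match: illegal address access    */
}

int main(void)
{
    printf("pid 1, page 7  -> frame %ld\n", lookup_frame(1, 7));
    printf("pid 2, page 99 -> frame %ld\n", lookup_frame(2, 99));
    return 0;
}
```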
Drawbacks of Inverted Page Tables
- Inverted page tables have difficulty implementing shared memory.
- The standard method of implementing shared memory cannot be used with inverted page tables.
- A simple technique for addressing this issue is to allow the page table to contain only one mapping of a virtual address to the shared physical address.
Description
This quiz assesses knowledge of operating systems concepts, covering topics from textbooks by Abraham Silberschatz and William Stallings. It's ideal for computer science students.