Questions and Answers
What distinguishes a 'reusable' resource category from a 'consumable' one in operating systems?
- Reusable resources are permanently blocked until manually released; consumable resources release automatically.
- Consumable resources are of a fixed quantity, whereas reusable resources can be dynamically created.
- Reusable resources can be used by multiple processes simultaneously, unlike consumable resources.
- Consumable resources are depleted after use, while reusable resources are not. (correct)
In the context of concurrency, what condition is necessary for a deadlock to occur when processes are attempting to receive messages from each other?
- At least one process must use a timeout for the receive operation.
- The receive operation must be blocking. (correct)
- Processes must use shared memory instead of message passing.
- The send operation must be non-blocking.
What is a primary disadvantage of deadlock prevention via 'resource ordering'?
- It undercommits resources, leading to reduced efficiency.
- It disallows incremental resource requests. (correct)
- It requires run-time computation, increasing overhead.
- It necessitates preemption of resources, causing instability.
Which strategy for handling deadlocks is considered the most conservative, potentially leading to underutilization of resources?
What is a key advantage of deadlock detection over deadlock prevention or avoidance?
Which condition is necessary for a deadlock to occur?
Which of the following approaches is used to address deadlock by dynamically deciding whether a resource allocation request will be granted?
What is a 'safe state' in the context of the Banker's Algorithm for deadlock avoidance?
What restriction is imposed by deadlock avoidance that might limit its practicality in real systems?
What is a significant disadvantage of using deadlock detection algorithms?
What is the purpose of a 'recovery strategy' in the context of deadlocks?
Which Unix concurrency mechanism uses circular buffers to enable communication between two processes following the producer-consumer model?
Which of the following is a characteristic of shared memory in Unix-like systems?
What capability do semaphores provide beyond simpler locking mechanisms?
How does a process typically respond to a signal in a Unix-like environment?
What is the primary purpose of a spinlock in the Linux kernel?
What is a potential disadvantage of using spinlocks?
Which of the following describes an 'atomic operation'?
What is the purpose of mutex locks in the Solaris thread synchronization primitives?
Which of the following is a key aspect of condition variables?
What is the main advantage of using reader-writer locks over exclusive locks (mutexes)?
What is a characteristic of critical sections in Windows?
What property defines 'lock-free' synchronization?
Which memory management requirement is satisfied by ensuring that processes have authorized access to memory locations?
What problem does 'overlaying' attempt to solve in memory management?
What is the main disadvantage of 'fixed partitioning' in memory management?
What is the primary advantage of dynamic partitioning?
What is the purpose of 'compaction' in dynamic partitioning?
What is a key characteristic of 'simple paging'?
In memory management, what is the distinction between 'logical' and 'physical' addresses?
What is the role of a 'page table' in memory management?
What is a 'buffer overflow' attack related to?
In virtual memory, what is the key benefit of not requiring all pages of a process to be in main memory during execution?
What is 'thrashing' in the context of virtual memory?
What is the role of the Translation Lookaside Buffer (TLB)?
Which of the following is a key factor influencing policies for virtual memory?
Flashcards
Deadlock
The permanent blocking of a set of processes that compete for system resources or communicate.
Reusable Resource
A resource that can be safely used by only one process at a time and is not depleted by that use.
Consumable Resource
A resource that can be created and destroyed.
Deadlock Prevention: Conservative
Deadlock Detection
Mutual Exclusion
Hold and Wait
Circular Wait
Prevent Deadlock
Avoid Deadlock
Detect Deadlock
Deadlock Prevention Strategy
Deadlock Avoidance
Resource allocation denial
State of the system
Process initiation denial
Frame
Page
Segment
Relocation Necessity
Overlaying
Memory partitioning
Fixed partitioning
Dynamic partitioning
Simple paging
Simple segmentation
Buddy System
Virtual Address
Real Address
Utility of buffering
Priority (PRI)
Shortest service time (SSTF)
SCAN Disk Scheduling
C-SCAN (Circular Scan)
Windows volume shadow copies
Study Notes
Concurrency: Deadlock and Starvation
- Deadlock is the permanent blocking of a set of processes that compete for system resources or communicate with each other.
- A set of processes is deadlocked when each process is blocked, awaiting an event that can only be triggered by another blocked process in the set.
- Deadlock is permanent and has no efficient solution.
Resource Categories
- Reusable resources can be safely used by only one process at a time and are not depleted by that use.
- Examples include processors, I/O channels, main and secondary memory, devices, and data structures like files, databases, and semaphores.
- Consumable resources can be created (produced) and destroyed (consumed).
- Includes interrupts, signals, messages, and information, as well as I/O buffers.
Consumable Resources Deadlock
- Consider two processes, P1 and P2, attempting to receive messages from each other.
- P1: Receive(P2), Send(P2, M1)
- P2: Receive(P1), Send(P1, M2)
- Deadlock occurs if the receive operation is blocking.
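As a concrete illustration, here is a minimal C sketch of the scenario above, assuming System V message queues (`msgget`/`msgsnd`/`msgrcv`); the queue setup and message format are illustrative, not from the notes. Because each child issues a blocking `msgrcv` before its `msgsnd`, neither send is ever reached and the program hangs:

```c
#include <stdlib.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/wait.h>
#include <unistd.h>

struct message { long mtype; char mtext[32]; };

static void receive_then_send(int rx, int tx, const char *text)
{
    struct message m = { .mtype = 1 };
    msgrcv(rx, &m, sizeof m.mtext, 0, 0);   /* blocking receive: waits forever */
    strncpy(m.mtext, text, sizeof m.mtext - 1);
    msgsnd(tx, &m, sizeof m.mtext, 0);      /* never reached */
}

int main(void)
{
    int to_p1 = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    int to_p2 = msgget(IPC_PRIVATE, IPC_CREAT | 0600);

    if (fork() == 0) {                      /* P1: Receive(P2); Send(P2, M1) */
        receive_then_send(to_p1, to_p2, "M1");
        exit(0);
    }
    if (fork() == 0) {                      /* P2: Receive(P1); Send(P1, M2) */
        receive_then_send(to_p2, to_p1, "M2");
        exit(0);
    }
    wait(NULL);                             /* parent hangs too: both children */
    wait(NULL);                             /* are blocked in msgrcv() */
    return 0;
}
```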
Deadlock Detection, Prevention, and Avoidance
Prevention
- Conservative approach that under-commits resources and requests all resources at once.
Prevention - Advantages
- Works well for processes performing a single burst of activity.
- No preemption necessary.
Prevention - Disadvantages
- Future resource requirements must be known by processes.
Preemption
- Convenient when applied to resources whose state can be saved and restored easily.
Preemption - Disadvantage
- Preempts resources more often than necessary.
Resource Ordering
- Feasible to enforce via compile-time checks.
- Needs no run-time computation since the problem is solved in system design.
Resource Ordering - Disadvantage
- Disallows incremental resource requests.
Avoidance
- A hybrid approach between detection and prevention.
- Manipulates resource allocation to find at least one safe path.
Avoidance - Advantages
- No preemption necessary.
Avoidance - Disadvantages
- Future resource requirements must be known by the OS.
- Processes can be blocked for long periods.
Detection
- A very liberal approach where requested resources are granted when possible.
- Invoked periodically to test for deadlock.
Detection - Advantages
- Never delays process initiation.
- Facilitates online handling.
Detection - Disadvantages
- Inherent preemption losses.
Conditions for Deadlock
Mutual Exclusion
- Only one process may use a resource at a time.
Hold and Wait
- A process may hold allocated resources while awaiting assignment of others.
No Preemption
- No resource can be forcibly removed from a process holding it.
Circular Wait
- A closed chain of processes exists, where each process holds at least one resource needed by the next process in the chain.
Dealing with Deadlock
- Three general approaches exist: prevent, avoid, or detect deadlock.
- Prevent deadlock by adopting a policy that eliminates one of the necessary conditions.
- Avoid deadlock by making dynamic choices based on the current state of resource allocation.
- Detect deadlock by attempting to detect its presence and take action to recover.
Deadlock Prevention Strategy
- Design a system to exclude the possibility of deadlock.
- Involves indirect methods to prevent the occurrence of necessary conditions and direct methods to prevent circular wait.
Deadlock Condition Prevention
- Mutual exclusion must be supported by the OS if access to a resource requires it.
- Prevent "hold and wait" by requiring processes to request all resources at once and blocking them until all requests can be granted simultaneously.
- For "no preemption", if a process holding resources is denied a further request, it must release its original resources and request them again; the OS may also preempt the second process.
- Prevent "circular wait" by defining a linear ordering of resource types.
Deadlock Avoidance
- A decision is made dynamically whether a resource allocation request, if granted, could potentially lead to a deadlock.
- Requires knowledge of future process requests.
Approaches to Deadlock Avoidance
Resource Allocation Denial
- Do not grant an incremental resource request to a process if it might lead to deadlock.
- The Banker's algorithm determines the 'safe state', where a sequence of resource allocations exists that does not result in a deadlock.
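A sketch of the safety check at the heart of the Banker's algorithm, assuming `N` processes and `M` resource types; the array names `available`, `alloc`, and `need` are illustrative:

```c
#include <string.h>

#define N 4   /* processes */
#define M 3   /* resource types */

/* Returns 1 if some order exists in which every process can finish. */
int is_safe(const int available[M], const int alloc[N][M], const int need[N][M])
{
    int work[M], finished[N] = {0};
    memcpy(work, available, sizeof work);

    for (int done = 0; done < N; ) {
        int progress = 0;
        for (int i = 0; i < N; i++) {
            if (finished[i]) continue;
            int ok = 1;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                           /* i can run to completion */
                for (int j = 0; j < M; j++)
                    work[j] += alloc[i][j];     /* then releases its resources */
                finished[i] = 1; done++; progress = 1;
            }
        }
        if (!progress) return 0;                /* no safe sequence exists */
    }
    return 1;
}
```

A request is granted only if the state that would result from granting it still passes this check.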
Process Initiation Denial
- Do not start a process if its demands might lead to deadlock.
- Determination of Safe State is based on the current allocations made to the processes in the system.
Deadlock Avoidance Advantages
- Does not require preemption or rollback of processes like deadlock detection.
- Less restrictive than deadlock prevention.
Deadlock Avoidance Restrictions
- The maximum resource requirement for each process must be stated in advance.
- Processes must be independent with no synchronization requirements.
- The fixed number of resources to allocate cannot be exceeded.
- No process may exit while holding resources.
Deadlock Strategies
- Prevention strategies are very conservative.
- They limit access to resources by imposing restrictions on processes.
- Detection strategies are the opposite, allowing requests to be granted whenever possible.
Deadlock Detection Algorithms
- A check for deadlock can be made as frequently as each resource request or less frequently, depending on the likelihood of deadlock.
Deadlock Detection Algorithms - Advantages
- Can lead to early detection
- The algorithm is relatively simple.
Deadlock Detection Algorithms - Disadvantage
- Frequent checks consume considerable processor time.
Recovery Strategies
- Recovery can be achieved in several ways: abort all deadlocked processes; roll back each deadlocked process to a previously defined checkpoint; successively abort deadlocked processes until the deadlock no longer exists; or successively preempt resources until the deadlock no longer exists.
Summary of Deadlock
Prevention
- Conservative, under-commits resources.
- Requesting all resources at once.
- Works well for processes that perform a single burst of activity.
- No preemption necessary, but inefficient.
- Delays process initiation.
- Future resource requirements must be known by processes.
Preemption
- Convenient when applied to resources whose state can be saved and restored easily.
- Preempts resources more often than necessary.
Resource Ordering
- Feasible to enforce via compile-time checks.
- Needs no run-time computation since the problem is solved in system design.
- Disallows incremental resource requests.
Avoidance
- Midway between that of detection and prevention.
- Manipulates resource allocation to find at least one safe path.
- No preemption necessary.
- Future resource requirements must be known by the OS.
- Processes can be blocked for long periods.
Detection
- Very liberal; requested resources are granted when possible.
- Invoke periodically to test for deadlock.
- Never delays process initiation.
- Facilitates online handling, but with inherent preemption losses.
Unix Concurrency Mechanisms
Pipes
- Circular buffers allow two processes to communicate on the producer-consumer model.
- First in, first out queue written by one process and read by another.
- Named and unnamed types.
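A minimal sketch of an unnamed pipe used on the producer-consumer model; the message text is illustrative:

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                           /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                  /* child: consumer */
        char buf[64] = {0};
        close(fd[1]);
        read(fd[0], buf, sizeof buf - 1);   /* blocks until data arrives */
        printf("consumer read: %s\n", buf);
        return 0;
    }
    close(fd[0]);                       /* parent: producer */
    const char *msg = "hello from producer";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}
```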
Messages
- A block of bytes with an accompanying type.
- Unix provides the `msgsnd` and `msgrcv` system calls for processes to engage in message passing.
- Associated with each process is a message queue, functioning like a mailbox.
Shared Memory
- Fastest form of interprocess communication.
- A common block of virtual memory shared by multiple processes, with read-only or read-write permissions.
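A sketch using the System V shared memory calls (`shmget`/`shmat`); the segment size and permissions are illustrative, and real code would pair the shared block with a synchronization mechanism such as a semaphore:

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    char *mem = shmat(id, NULL, 0);     /* map the segment read-write */

    strcpy(mem, "shared greeting");     /* written before fork: no race here */
    if (fork() == 0) {                  /* child sees the same physical pages */
        printf("child read: %s\n", mem);
        return 0;
    }
    wait(NULL);
    shmdt(mem);                         /* detach and remove the segment */
    shmctl(id, IPC_RMID, NULL);
    return 0;
}
```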
Semaphores
- A generalization of the `semWait` and `semSignal` primitives.
- No other process may access the semaphore until all operations have completed.
Semaphores - Consists of
- The current value of the semaphore.
- ID of the last process to operate on the semaphore.
- Number of processes waiting for the semaphore value to be greater than its current value.
- Number of processes waiting for the semaphore value to be 0.
Signals
- A software mechanism that informs a process of the occurrence of asynchronous events.
- Similar to a hardware interrupt, but does not employ priorities.
- A signal is delivered by updating a field in the process table entry of the process to which the signal is being sent.
- A process may respond to a signal by performing some default action, executing a signal handler function or ignoring the signal.
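A sketch of those three responses using the portable `signal()` interface; the particular signals chosen are illustrative:

```c
#include <signal.h>
#include <unistd.h>

static void on_sigint(int sig)
{
    /* use write(), which is async-signal-safe, instead of printf() */
    write(1, "caught SIGINT\n", 14);
}

int main(void)
{
    signal(SIGINT, on_sigint);    /* execute a signal handler function */
    signal(SIGQUIT, SIG_IGN);     /* ignore the signal */
    signal(SIGTERM, SIG_DFL);     /* keep the default action (terminate) */
    pause();                      /* block until a signal arrives */
    return 0;
}
```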
Linux Kernel Concurrency Mechanisms
Barriers
- Enforce the order in which instructions are executed.
Spinlocks
- Most common technique for protecting a critical section in Linux.
- Can only be acquired by one thread at a time; any other thread attempting to acquire the lock spins until it becomes available.
- Effective when the wait time for acquiring a lock is expected to be short.
- Locked-out threads continue to execute in a busy-waiting mode.
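A sketch of typical spinlock usage inside the kernel (this is kernel code, not a user program); the protected counter is illustrative:

```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(counter_lock);
static unsigned long counter;

void bump_counter(void)
{
    unsigned long flags;

    spin_lock_irqsave(&counter_lock, flags);    /* other CPUs now spin */
    counter++;                                  /* critical section: keep short */
    spin_unlock_irqrestore(&counter_lock, flags);
}
```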
Semaphores
- Linux provides a user-level semaphore interface; kernel semaphores are implemented internally as functions within the kernel.
Semaphores - Types
- Includes binary, counting, and reader-writer semaphores.
Atomic Operations
- Execute without interruption and are the simplest approach to kernel synchronization.
Atomic Operations - Types
- Integer operations (for implementing counters).
- Bitmap operations (operating on bits at an arbitrary memory location).
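A sketch of both kinds of atomic operation as they appear in kernel code; the reference count and flag word are illustrative:

```c
#include <linux/atomic.h>
#include <linux/bitops.h>

static atomic_t refs = ATOMIC_INIT(0);
static unsigned long flag_word;

void examples(void)
{
    atomic_inc(&refs);                   /* integer op: uninterruptible increment */
    if (atomic_dec_and_test(&refs)) {
        /* value reached zero: last reference dropped */
    }
    set_bit(0, &flag_word);              /* bitmap op at an arbitrary location */
    clear_bit(0, &flag_word);
}
```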
Synchronization Primitives
- Solaris supports 4 thread synchronization primitives in addition to Unix SVR4 concurrency mechanisms.
Mutual Exclusion Locks (Mutex)
- Used to ensure only one thread at a time can access the resource protected by the mutex.
- The thread that locks the mutex (with the `mutex_enter` primitive) must be the one that unlocks it (with `mutex_exit`).
Semaphores
- Solaris provides classic counting semaphores, with the primitives `sema_p()`, which decrements the semaphore (potentially blocking the thread), and `sema_v()`, which increments the semaphore (potentially unblocking a waiting thread).
- Also includes `sema_tryp()`, which decrements the semaphore only if blocking is not required.
Condition Variables
- Used to wait until a particular condition is true.
- Must be used in conjunction with a mutex lock.
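The same pattern exists at user level; a sketch with POSIX threads (the kernel primitives follow the same shape), showing the wait in a loop with the mutex always held:

```c
#include <pthread.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int data_available;

void *consumer(void *arg)
{
    pthread_mutex_lock(&lock);
    while (!data_available)                 /* re-check: wakeups can be spurious */
        pthread_cond_wait(&ready, &lock);   /* atomically unlocks and sleeps */
    data_available = 0;                     /* consume while holding the mutex */
    pthread_mutex_unlock(&lock);
    return NULL;
}

void producer_signal(void)
{
    pthread_mutex_lock(&lock);
    data_available = 1;
    pthread_cond_signal(&ready);            /* wake one waiting thread */
    pthread_mutex_unlock(&lock);
}
```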
Readers / Writers Locks
- Allow multiple threads to have simultaneous read-only access or a single thread to have exclusive write access.
- When a lock is acquired for writing, it takes on the status of "write lock".
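A sketch of the same idea using POSIX reader-writer locks; the shared integer is illustrative:

```c
#include <pthread.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static int shared_value;

int read_value(void)
{
    pthread_rwlock_rdlock(&rw);     /* shared: many readers at once */
    int v = shared_value;
    pthread_rwlock_unlock(&rw);
    return v;
}

void write_value(int v)
{
    pthread_rwlock_wrlock(&rw);     /* exclusive: the "write lock" status */
    shared_value = v;
    pthread_rwlock_unlock(&rw);
}
```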
Windows 7 Concurrency Mechanisms
- Windows provides synchronization among threads as part of its architecture.
Windows 7 Concurrency Mechanisms - Includes
- Executive dispatcher objects, user-mode critical sections, slim reader-writer locks, condition variables, and lock-free operations.
Wait Functions
- Allow a thread to block its own execution until specified criteria have been met.
- The type of wait function determines the set of criteria used.
Critical Sections
- Functionally similar to mutexes but usable only by threads of a single process.
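A sketch of the critical-section API; the shared counter is illustrative:

```c
#include <windows.h>

static CRITICAL_SECTION cs;
static int shared;

void init(void)
{
    InitializeCriticalSection(&cs);   /* must run before first use */
}

void bump(void)
{
    EnterCriticalSection(&cs);        /* blocks other threads of this process */
    shared++;
    LeaveCriticalSection(&cs);
}
```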
Slim Reader-Writer (SRW) Locks
- Added in Windows Vista as a user-mode reader-writer lock.
Condition Variables
- Windows has condition variables that must be declared and initialized, and used with critical sections or SRW locks.
Lock-Free Synchronization
- Windows relies heavily on interlocked operations for synchronization, using hardware facilities to guarantee atomic memory operations.
- Synchronizing without taking a software lock.
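A sketch of two interlocked primitives; the values are illustrative:

```c
#include <windows.h>

static volatile LONG counter;

void example(void)
{
    InterlockedIncrement(&counter);   /* atomic increment, no lock taken */

    /* if counter currently equals 1, atomically replace it with 10;
       'old' receives whatever value counter held beforehand */
    LONG old = InterlockedCompareExchange(&counter, 10, 1);
    (void)old;
}
```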
Summary
- Permanent blocking of processes competing for resources.
- The blockage is permanent unless the OS takes some extraordinary action.
- Detection, prevention, and avoidance are strategies.
- May involve reusable or consumable resources.
- Consumable resources are destroyed when acquired.
- Reusable resources are not depleted or destroyed by use.
Chapter 7 Memory Management
Terminology
Frame
- A fixed-length block of main memory.
Page
- A fixed-length block of data residing in secondary memory (e.g., disk), temporarily copied into a frame of main memory.
Segment
- A variable-length block of data residing in secondary memory. An entire segment may be copied into available main memory, or divided into pages individually copied into main memory.
Memory Management Requirements
- Memory management is intended to satisfy several requirements, including:
Relocation
- Programmers do not know in advance which programs will be resident in main memory during execution.
- Active processes need to be swapped in and out to maximize processor utilization.
- Requiring that a process be placed in the same memory region when swapped back in would be limiting.
Protection
- Processes need permission to reference memory locations for reading or writing.
- Location of a program in main memory is unpredictable.
- Memory references must be checked at run time.
- Mechanisms supporting relocation also support protection.
Sharing
- It is advantageous to allow each process to access the same program copy rather than have separate copies.
- Memory management must allow access to shared areas without compromising protection.
- Mechanisms supporting relocation support sharing capabilities.
Logical Organization
- Memory is organized linearly, while programs are written in modules.
- Modules can be written and compiled independently, can be given varying degrees of protection (read-only, execute-only), and can be shared between users.
Physical Organization
- Program memory management should not be the responsibility of the programmer: the memory available for a program plus its data may be insufficient, and the programmer does not know in advance how much space will be available.
Memory Partitioning
- Memory management brings processes into main memory for execution by the processor, and may involve virtual memory based on segmentation and paging.
Partitioning
- Achieved through several variations, now largely obsolete, in which memory is divided into one or more regions, each of which can be managed separately.
- Does not involve virtual memory.
Memory Management Techniques
Fixed Partitioning
- Main memory is divided into static partitions at system generation time. A process may be loaded into a partition of equal or greater size. Easy to implement, but potentially inefficient.
Dynamic Partitioning
- Partitions are created dynamically, so each process is loaded into a partition of exactly the same size as the process. No internal fragmentation and more efficient use of main memory, but inefficient use of the processor due to the need for compaction to counter external fragmentation.
Simple Paging
- Main memory is divided into a number of equal-size frames. Each process is divided into a number of equal-size pages of the same length as the frames. A process is loaded by loading all of its pages into available, not necessarily contiguous, frames. Eliminates external fragmentation, with only a small amount of internal fragmentation.
Simple Segmentation
- Each process is divided into a number of segments. A process is loaded by loading all of its segments into dynamic partitions that need not be contiguous. Eliminates internal fragmentation and improves memory utilization, but external fragmentation still occurs.
Virtual Memory Paging
- Like simple paging, except that it is not necessary to load all the pages of a process. No external fragmentation, a higher degree of multiprogramming, and a large virtual address space, but with the overhead of complex memory management.
Virtual Memory Segmentation
- As with simple segmentation, except that it is not necessary to load all of the segments of a process; nonresident segments that are needed are brought in later automatically. Supports protection and sharing, eliminates internal fragmentation, and allows a higher degree of multiprogramming and a large virtual address space, but with the overhead of complex memory management.
Fixed Partitioning - Equal Size Partitions
- Any process whose size is less than or equal to the partition size can be loaded into an available partition.
- The operating system can swap out a process if all partitions are full and no process is in the ready or running state.
- Limitations: a program may be too big to fit in a partition unless it is designed with the use of overlays, and main memory utilization is inefficient because any program, regardless of size, occupies an entire partition.
Internal Fragmentation
- Wasted space due to the block of data loaded being smaller than the partition.
Unequal Size Partitions
- Lessen the problem: programs up to 16M can be accommodated without overlays, and partitions smaller than 8M allow smaller programs to be placed with less wasted space.
Memory Assignment
- The number of partitions specified at system generation time limits the number of active processes in the system (unless dynamic partitioning is used), and small jobs will not utilize partition space efficiently under fixed partitioning.
Dynamic Partitioning - Placement Algorithms
Placement Algorithms - Best Fit
- Chooses the block that is closest in size to the request
Placement Algorithms - First fit
- Begins to scan memory from the beginning and chooses the first available block that is large enough
Next fit
- Begins to scan memory from the location of the last placement and chooses the next available block that is large enough
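A sketch of first fit and best fit over a singly linked free-block list (next fit differs from first fit only in starting the scan at the last placement point); the list structure is illustrative:

```c
#include <stddef.h>

struct block { size_t size; struct block *next; };

/* First fit: the first block large enough, scanning from the beginning. */
struct block *first_fit(struct block *head, size_t want)
{
    for (struct block *b = head; b; b = b->next)
        if (b->size >= want) return b;
    return NULL;
}

/* Best fit: the block closest in size to the request. */
struct block *best_fit(struct block *head, size_t want)
{
    struct block *best = NULL;
    for (struct block *b = head; b; b = b->next)
        if (b->size >= want && (!best || b->size < best->size))
            best = b;
    return best;
}
```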
Buddy System
- A compromise between fixed and dynamic partitioning schemes.
- Space available for allocation is treated as a single block and is divided into fixed size blocks
- Whenever a process requests memory, the system finds the smallest available block that can accommodate the requested memory size
- Memory blocks are available in sizes of 2^K words, with L <= K <= U, where 2^L is the smallest block size allocated and 2^U is the largest block size allocated (generally the size of the entire memory available for allocation).
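Two small helpers sketch the core buddy-system arithmetic; all other bookkeeping (free lists per size, coalescing) is omitted:

```c
#include <stddef.h>

/* Round a request up to the nearest power of two, starting from 2^L,
   the smallest allocatable block. */
static size_t buddy_round(size_t n, size_t min_block)
{
    size_t size = min_block;
    while (size < n) size <<= 1;
    return size;
}

/* The buddy of a block at 'offset' with the given size differs only in
   the bit that distinguishes the pair; used when splitting and coalescing. */
static size_t buddy_of(size_t offset, size_t size)
{
    return offset ^ size;
}
```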
Addresses
- Logical: Reference to a memory location independent of the current assignment of data to memory
- Relative: Address is expressed as a location relative to some known point
- Physical or absolute: Actual location in main memory.
Paging
- Memory is split into equal fixed-size chunks that are relatively small
- A Process is also divided into small fixed-size chunks of the same size
- Pages: Chunks of a process
- Frames: Available chunks of memory
Page table
- Maintained by OS for each process which contains the frame location for each page in the process
- The processor must know how to locate the page table of the current process; it uses the page table to produce a physical address from a logical (page number, offset) address.
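A sketch of the translation performed with that table; the 4-KiB page size and 32-bit addresses are illustrative:

```c
#include <stdint.h>

#define PAGE_SHIFT 12                          /* 4096-byte pages */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

uint32_t translate(const uint32_t *page_table, uint32_t logical)
{
    uint32_t page   = logical >> PAGE_SHIFT;   /* page number */
    uint32_t offset = logical & PAGE_MASK;     /* byte within the page */
    uint32_t frame  = page_table[page];        /* OS-maintained entry */
    return (frame << PAGE_SHIFT) | offset;     /* physical address */
}
```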
Segmentation
- A program can be subdivided into segments that vary in length, up to a maximum segment length.
- Addressing consists of two parts: a segment number and an offset.
- Similar to dynamic partitioning; eliminates internal fragmentation.
Security Issues
- If a process declares that a portion of its memory may be shared by other designated processes, the security service of the OS must ensure that only the designated processes have access.
- If a process has not declared a portion of its memory shareable, no other process should have access to the contents of that portion of memory.
Buffer Overflow Attacks
- A security threat related to memory management.
- Also known as a buffer overrun; can occur when a process attempts to store data beyond the limits of a fixed-size buffer.
- One of the most prevalent and dangerous types of security attack, so it requires a defense.
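A classic illustration (not from the notes) of the vulnerable pattern, together with a bounded alternative:

```c
#include <string.h>

void vulnerable(const char *input)
{
    char buf[16];
    strcpy(buf, input);     /* no bounds check: input longer than 15 bytes
                               writes past buf, corrupting adjacent memory */
}

void safer(const char *input)
{
    char buf[16];
    strncpy(buf, input, sizeof buf - 1);   /* bounded copy */
    buf[sizeof buf - 1] = '\0';            /* guarantee termination */
}
```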
Defending Against Buffer Overflows
- Involves prevention, and detecting and aborting attacks.
- Compile-time defenses aim to harden new programs to resist attacks.
- Run-time defenses aim to detect and abort attacks in existing programs.
Summary
- Memory must be treated as a resource to be allocated to, and shared among, a number of active processes.
- It is desirable to maintain as many processes as possible in main memory and to free programmers from size restrictions in program development.
- The basic tools are paging (small, fixed-size pages) and segmentation (pieces of varying size); the two can be combined.
Chapter 8 Virtual Memory
- Covers the hardware and control structures that support virtual memory.
- Two characteristics are fundamental to memory management:
Memory Characteristics
- All memory references are logical addresses that are dynamically translated into physical addresses at run time
- A process may be broken up into a number of pieces that don't need to be contiguously located in main memory during execution.
- If these 2 characteristics are present, it is not necessary that all of the pages or segments of a process be in main memory during execution
Virtual Memory
- The term refers to a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory, and in which the addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites.
- Program-generated addresses are translated automatically to the corresponding machine addresses.
Virtual Address
- The address assigned to a location in virtual memory to allow that location to be accessed as though it were part of main memory
Virtual Address Space
- The virtual storage assigned to a process
Address Space
- The range of memory addresses available to a process
Real Address
- The address of a storage location in main memory
Execution of a Process
- The OS brings into main memory only a few pieces of the program; the resident set is the portion of the process that is in main memory.
- When an address is needed that is not in main memory, an interrupt is generated; the OS places the process in a blocking state and issues a disk I/O read request to bring in the piece of the process containing the logical address.
- Another process is dispatched to run while the disk I/O takes place; when the I/O completes, an interrupt is issued and the OS places the affected process in the ready state.
Implications
- More processes may be maintained in main memory, since only some of the pieces of each process are loaded.
- With so many processes in main memory, it is very likely that a process will be in the ready state at any particular time.
- A process may even be larger than all of main memory.
Real and Virtual Memory
Real Memory
- Main memory, the actual RAM.
Virtual Memory
- Memory on disk, addressed as though it were part of main memory.
- Allows for effective multiprogramming and relieves the user of the tight constraints of main memory.
Thrashing
- A state in which the system spends most of its time swapping process pieces rather than executing instructions
- To avoid this, the OS tries to guess, based on recent history, which pieces are least likely to be used in the near future.
Principle Of Locality
- Program and data references tend to cluster, so only a few pieces of a process will be needed over a short period of time; this makes it possible to make intelligent guesses about which pieces will be needed in the future.
Paging Behaviour
- During the lifetime of a process, references are confined to a subset of its pages.
Support Needed for Virtual Memory
- For virtual memory to be practical and effective, hardware must support paging and/or segmentation.
- The OS must include software for managing the movement of pages and/or segments between secondary memory and main memory, and must be able to avoid thrashing.
Paging
- The term virtual memory is usually associated with systems that employ paging; the use of paging to achieve virtual memory was first reported for the Atlas computer.
- Each process has its own page table, which requires some table maintenance; each page table entry contains the frame number of the corresponding page in main memory.
Inverted Page Table
- The page number portion of a virtual address is mapped into a hash value, which points into the inverted page table.
- Provides a fixed proportion of real memory for the tables, regardless of the number of processes or virtual pages supported.
- The table is called inverted because it indexes page table entries by frame number rather than by virtual page number.
- Each entry typically includes: the page number, the process identifier, control bits (flags, protection, locking), and a chain pointer giving the index of the next entry in the chain.
Translation Lookaside Buffer (TLB)
- Each virtual memory reference can cause two physical memory accesses: one to fetch the page table entry and one to fetch the data.
- To overcome this doubling of memory access time, most virtual memory schemes use a special high-speed cache for page table entries, called a translation lookaside buffer.
- The TLB contains recently used page table entries and requires processor hardware that supports associative mapping.
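A sketch of how a TLB consultation precedes the page-table walk; the entry count is illustrative, and the linear scan stands in for the parallel comparison that associative hardware performs:

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 16

struct tlb_entry { uint32_t page; uint32_t frame; bool valid; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* Fully associative lookup: every entry is checked against the page number. */
bool tlb_lookup(uint32_t page, uint32_t *frame)
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].page == page) {
            *frame = tlb[i].frame;
            return true;            /* hit: no page-table access needed */
        }
    return false;                   /* miss: walk the page table instead */
}
```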
Page Size
- The smaller the page size, the less the internal fragmentation; however, more pages are required per process.
- More pages per process means larger page tables.
- For large programs in a heavily multiprogrammed environment, some portion of the page tables of active processes must be held in virtual memory instead of main memory.
- The physical characteristics of most secondary-memory devices favor a larger page size, for more efficient block transfer of data.
- The design issue of page size is also related to the size of physical main memory and to program size: main memory is getting larger, and the address space used by applications is also growing (most obvious on personal computers, where applications are becoming increasingly complex).
- Contemporary programming techniques used in large programs tend to decrease the locality of references within a process.
Segmentation
- Segmentation allows the programmer to view memory as consisting of multiple address spaces or segments.
- Advantages: simplifies handling of growing data structures; allows programs to be altered and recompiled independently; lends itself to sharing data among processes; lends itself to protection.
Segment Organization
- Each segment table entry contains the starting address of the corresponding segment in main memory and the length of the segment.
- A bit is needed to determine whether the segment is already in main memory, and another bit to determine whether the segment has been modified since it was loaded.
Combined Paging and Segmentation
- In a combined paging/segmentation system, a user's address space is broken up into a number of segments, and each segment is broken up into a number of fixed-size pages equal in length to a main memory frame.
- Segmentation is visible to the programmer; paging is transparent to the programmer.
Protection and Sharing
- Segmentation lends itself to the implementation of protection and sharing policies: each entry has a base address and length, so inadvertent memory access can be controlled.
- Sharing can be achieved by having segments referenced by multiple processes.
OS Software
- The design of the memory management portion of an OS depends on three fundamental areas of choice: whether or not to use virtual memory techniques; the use of paging, segmentation, or both; and the algorithms employed for various aspects of memory management.
Policies for Virtual Memory
- Key issue: performance, i.e., minimizing the rate of page faults.
- Fetch policy: determines when a page should be brought into memory (demand paging or prepaging).
- Placement policy.
- Replacement policy, with basic algorithms: optimal, least recently used (LRU), first-in-first-out (FIFO), and clock.
- Page buffering.
- Resident set management: resident set size (fixed or variable) and replacement scope (global or local).
- Cleaning policy: demand cleaning or precleaning.
- Load control for the multiprogramming level.
Fetch Policy: Demand Paging
- Only brings pages into main memory when a reference is made to a location on the page.
- Many page faults occur when a process is first started.
- The principle of locality suggests that as more and more pages are brought in, most future references will be to pages that have recently been brought in, and page faults should drop to a very low level.
Fetch Policy: Prepaging
- Pages other than the one demanded by a page fault are brought in.
- Exploits the characteristics of most secondary memory devices: if the pages of a process are stored contiguously in secondary memory, it is more efficient to bring in a number of pages at one time.
- Ineffective if the extra pages are not referenced.
- Should not be confused with "swapping".
Placement Policy
- Determines where in real memory a process piece is to reside.
- An important design issue in a segmentation system.
- For paging, or combined paging with segmentation, placement is irrelevant, because the hardware performs the address translation functions with equal efficiency.
- For NUMA systems, an automatic placement strategy is desirable.
Replacement Policy
- Deals with the selection of a page in main memory to be replaced when a new page must be brought in.
- Objective: the page that is removed should be the page least likely to be referenced in the near future.
- The more elaborate the replacement policy, the greater the hardware and software overhead to implement it.
Frame Locking
- When a frame is locked, the page currently stored in that frame may not be replaced.
- The kernel of the OS, as well as key control structures, are held in locked frames; I/O buffers and time-critical areas may also be locked into main memory frames.
Replacement Algorithms
- The basic algorithms used for the selection of a page to replace are optimal, least recently used (LRU), first-in-first-out (FIFO), and clock.
Optimal Policy
- Selects the page for which the time to the next reference is the longest.
- Impossible to implement in practice (it would require perfect knowledge of future references), so it serves as a baseline; in the textbook example it produces three page faults after the frame allocation has been filled.
Least Recently Used (LRU)
- Replaces the page that has not been referenced for the longest time.
- By the principle of locality, this should be the page least likely to be referenced in the near future.
- Difficult to implement: one approach is to tag each page with the time of its last reference, but this requires a great deal of overhead.
First-In-First-Out (FIFO)
- Treats the page frames allocated to a process as a circular buffer; pages are removed in round-robin style.
- A simple replacement policy to implement: the page that has been in memory the longest is replaced.
Clock Policy
- Requires the association of an additional bit with each frame, referred to as the use bit.
- When a page is first loaded into memory or referenced, its use bit is set to 1.
- The set of frames is considered to be a circular buffer (page frames can be visualized as laid out in a circle).
- Any frame with a use bit of 1 is passed over by the algorithm (its use bit is reset to 0); the first frame encountered with a use bit of 0 is replaced.
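A sketch of one sweep of the clock hand over the circular buffer of frames; the frame array and fields are illustrative:

```c
#define NFRAMES 8

struct frame { int page; int use; };    /* 'use' is the use bit */
static struct frame frames[NFRAMES];
static int hand;                         /* the clock pointer */

/* Replace a page using the clock policy; returns the victim frame index. */
int clock_replace(int new_page)
{
    for (;;) {
        if (frames[hand].use) {
            frames[hand].use = 0;        /* pass over: reset the use bit */
            hand = (hand + 1) % NFRAMES;
        } else {
            int victim = hand;           /* first frame with use bit 0 */
            frames[victim].page = new_page;
            frames[victim].use  = 1;     /* newly loaded page: bit set to 1 */
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
    }
}
```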
Page Buffering
- Improves paging performance and allows the use of a simpler page replacement policy.
- A replaced page is not lost, but rather assigned to one of two lists: the free page list (page frames available for reading in pages) or the modified page list.
- Pages on the modified list are written out in clusters.
Replacement Policy and Cache Size
- With large caches, replacement of pages can have a performance impact: if the page frame selected for replacement is in the cache, that cache block is lost as well as the page it holds.
- In systems using page buffering, cache performance can be improved with a policy for page placement in the page buffer.
- Most operating systems place pages by selecting an arbitrary page frame from the page buffer.
Resident Set Management
- The OS must decide how many pages to bring into main memory.
- The smaller the amount of memory allocated to each process, the more processes can reside in memory, but a smaller number of loaded pages increases page faults.
- Beyond a certain size, further allocations of pages will not affect the page fault rate.
Resident Set Size
- Fixed allocation: gives a process a fixed number of frames in main memory within which to execute; when a page fault occurs, one of the pages of that process must be replaced.
- Variable allocation: allows the number of page frames allocated to a process to be varied over the lifetime of the process.
Replacement Scope
- The scope of a replacement strategy can be categorized as global or local; both types are activated by a page fault when there are no free page frames.
- Local: chooses only among the resident pages of the process that generated the page fault.
- Global: considers all unlocked pages in main memory.
Fixed Allocation, Local Scope
- Necessary to decide ahead of time the amount of allocation to give a process.
- If the allocation is too small, there will be a high page fault rate; if it is too large, there will be too few programs in main memory, leading to increased processor idle time and increased time spent in swapping.
Variable Allocation, Global Scope
- Easiest to implement; adopted in a number of operating systems.
- The OS maintains a list of free frames; a free frame is added to the resident set of a process when a page fault occurs.
- If no frames are available, the OS must choose a page currently in memory; one way to counter potential problems is to use page buffering.
Variable Allocation, Local Scope
- When a new process is loaded into main memory, allocate to it a certain number of page frames as its resident set.
- When a page fault occurs, select the page to replace from among the resident set of the process that suffered the fault.
- Reevaluate the allocation provided to the process, and increase or decrease it to improve overall performance.
- The decision to increase or decrease a resident set size is based on an assessment of the likely future demands of active processes; the key elements are the criteria used to determine resident set size and the timing of changes.
Page Fault Frequency (PFF)
- Requires a use bit to be associated with each page in memory; the bit is set to 1 when that page is accessed.
- When a page fault occurs, the OS notes the virtual time since the last page fault for that process.
- Does not perform well during the transient periods when there is a shift to a new locality.
Variable-Interval Sampled Working Set (VSWS)
- Evaluates the working set of a process at sampling instances based on elapsed virtual time.
- Driven by three parameters: the minimum duration of the sampling interval, the maximum duration of the sampling interval, and the number of page faults allowed to occur between sampling instances.
Cleaning Policy
- Concerned with determining when a modified page should be written out to secondary memory.
- Demand cleaning: a page is written out to secondary memory only when it has been selected for replacement.
- Precleaning: allows the writing of pages in batches.
Load Control
- Determines the number of processes that will be resident in main memory (the multiprogramming level); critical in effective memory management.
- With too few processes, there will be many occasions when all processes are blocked, and much time will be spent in swapping; too many processes will lead to thrashing.
Process Suspension
- If the degree of multiprogramming is to be reduced, one or more of the currently resident processes must be swapped out.
- Six candidates exist: the lowest-priority process; the faulting process; the last process activated; the process with the smallest resident set; the largest process; or the process with the largest remaining execution window.
Unix
- Intended to be machine independent, so its memory management schemes vary between implementations.
- Early Unix used variable partitioning with no virtual memory scheme; current implementations of UNIX and Solaris make use of paging and a kernel memory allocator.
Paging System and Kernel Memory Allocator
- Paging system: provides a virtual memory capability that allocates page frames in main memory to processes, and also allocates page frames to disk block buffers.
- Kernel memory allocator: allocates memory for the kernel.
Page Replacement
- The page frame data table is used for page replacement; pointers are used to create lists within the table.
- All available frames are linked together in a list of free frames available for bringing in pages.
- When the number of available frames drops below a certain threshold, the kernel will steal a number of frames to compensate.
Kernel Memory Allocator
- The kernel generates and destroys small tables and buffers frequently during the course of execution, each of which requires dynamic memory allocation.
- Most of these blocks are significantly smaller than typical pages, so paging would be inefficient; allocation and free operations must be made as fast as possible.
Lazy Buddy
- The technique adopted for SVR4.
- UNIX often exhibits steady-state behavior in kernel memory demand, i.e., the amount of demand for blocks of a particular size varies slowly over time.
- Defers coalescing until it seems likely to be needed, and then coalesces as many blocks as possible.
Linux Memory Management
- Shares many characteristics with UNIX but is quite complex.
- Two main aspects: process virtual memory and kernel memory allocation.
Linux Virtual Memory
- Three-level page table structure:
- Page directory: each process has a single page directory; each entry points to one page of the page middle directory; must be in main memory for an active process.
- Page middle directory: may span multiple pages; each entry points to one page in the page table.
- Page table: may also span multiple pages; each entry refers to one virtual page of the process.
Linux Page Replacement
- Based on the clock algorithm, with the use bit replaced by an 8-bit age variable that is incremented each time the page is accessed.
- Linux periodically decrements the age bits; a page with an age of 0 is an "old" page that has not been referenced in some time and is the best candidate for replacement.
- Amounts to a form of least-frequently-used policy.
Kernel Memory Allocation
- The kernel memory capability manages physical main memory page frames; its primary function is to allocate and deallocate frames for particular uses.
- Possible owners of a frame include user-space processes, dynamically allocated kernel data, static kernel code, and the page cache.
- A buddy algorithm is used so that memory for the kernel can be allocated and deallocated in units of one or more pages.
- The page allocator alone would be inefficient because the kernel requires small, short-term memory chunks in odd sizes; Linux uses slab allocation to accommodate these small chunks.
Windows Memory Management
- The virtual memory manager controls how memory is allocated and how paging is performed.
- Designed to operate over a variety of platforms; uses page sizes ranging from 4 Kbytes to 64 Kbytes.
Windows Virtual Address Map
- On 32-bit platforms, each user process sees a separate 32-bit (4-Gbyte) address space.