Concurrency Module 4 Lecture 1 PDF

Summary

This is a lecture from an operating systems course (module 4, continuing into module 5). It covers interprocess communication mechanisms such as shared memory and message passing, process synchronization (race conditions, the critical-section problem, Peterson's solution, the bakery algorithm, hardware support, mutex locks, semaphores, and monitors), classic synchronization problems, and an introduction to memory management.

Full Transcript


Concurrency Module – 4, Lecture 1

Module 2: Interprocess Communication

Processes executing concurrently in the operating system may be either independent processes or cooperating processes. Reasons for providing an environment that allows process cooperation: information sharing, computation speedup, and modularity.

Cooperating processes require an interprocess communication (IPC) mechanism that allows them to exchange data. There are two fundamental mechanisms: shared memory and message passing.

(Figure 3.11: Communications models. (a) Shared memory. (b) Message passing.)

Comparing the two models:
- Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided.
- Message passing is also easier to implement in a distributed system than shared memory.
- Shared memory can be faster than message passing, because message passing requires more system calls.

IPC in Shared-Memory Systems

IPC using shared memory requires the communicating processes to establish a region of shared memory, which resides in the address space of the processes. Processes exchange information by reading and writing data in the shared region. The form of the data and its location are determined by the processes themselves and are not under the operating system's control.

A classic example of cooperating processes is the producer-consumer problem: a producer process produces information that is consumed by a consumer process (for example, a compiler producing assembly code consumed by an assembler, which in turn feeds a loader). One solution to the producer-consumer problem uses shared memory: a buffer resides in a region of memory shared by the producer and consumer processes.

A producer can produce one item while the consumer is consuming another item. The two must be synchronized so that the consumer does not try to consume an item that has not yet been produced. Two variants exist: the unbounded buffer (no practical limit on buffer size) and the bounded buffer (fixed size).

(Figures: memory shared by the producer and consumer processes; the producer process using shared memory; the consumer process using shared memory.)

Exercise: Write a C program to implement the producer-consumer problem. A minimal sketch of the bounded-buffer version follows.
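The following is a minimal sketch of the shared-memory bounded buffer described above, in the style of the textbook figures the slides reference. The item type and buffer size are assumptions; a real program would place buffer, in, and out in an actual shared-memory segment (e.g., via POSIX shm_open), which is omitted here.

    #define BUFFER_SIZE 10          /* assumed capacity */

    typedef int item;               /* the item type is an assumption */

    item buffer[BUFFER_SIZE];       /* resides in the shared region */
    int in = 0;                     /* next free slot */
    int out = 0;                    /* next filled slot */

    void producer(void) {
        item next_produced;
        while (1) {
            /* produce an item in next_produced */
            while (((in + 1) % BUFFER_SIZE) == out)
                ;                   /* buffer full: busy wait */
            buffer[in] = next_produced;
            in = (in + 1) % BUFFER_SIZE;
        }
    }

    void consumer(void) {
        item next_consumed;
        while (1) {
            while (in == out)
                ;                   /* buffer empty: busy wait */
            next_consumed = buffer[out];
            out = (out + 1) % BUFFER_SIZE;
            /* consume the item in next_consumed */
        }
    }

Note that this scheme can hold at most BUFFER_SIZE - 1 items at once: the "full" test (in + 1) % BUFFER_SIZE == out deliberately leaves one slot empty to distinguish a full buffer from an empty one.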
IPC in Message-Passing Systems

Message passing provides a mechanism that allows processes to communicate and synchronize their actions without sharing the same address space. It is particularly useful in a distributed environment; an Internet chat program is one example.

A message-passing facility provides at least two operations: send(message) and receive(message). Messages sent by a process can be either fixed or variable in size. If processes P and Q want to communicate, they must send messages to and receive messages from each other: a communication link must exist between them.

Methods for logically implementing a link and its send()/receive() operations:
1. Direct or indirect communication
2. Synchronous or asynchronous communication
3. Automatic or explicit buffering

Direct communication: each process that wants to communicate must explicitly name the recipient or sender of the communication.
- send(P, message): send a message to process P.
- receive(Q, message): receive a message from process Q.
A link is established automatically between every pair of processes that want to communicate.

Addressing may be symmetric (both sender and receiver name each other) or asymmetric. Example of asymmetry in addressing:
- send(P, message): send a message to process P.
- receive(id, message): receive a message from any process.

With indirect communication, messages are sent to and received from mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification. A process can communicate with another process via a number of different mailboxes, but two processes can communicate only if they have a shared mailbox. A mailbox may be owned either by a process or by the operating system. Primitives for indirect communication:
- send(A, message): send a message to mailbox A.
- receive(A, message): receive a message from mailbox A.

Message passing may be either blocking or nonblocking, also known as synchronous and asynchronous:
- Blocking send: the sending process is blocked until the message is received by the receiving process or by the mailbox.
- Nonblocking send: the sending process sends the message and resumes operation.
- Blocking receive: the receiver blocks until a message is available.
- Nonblocking receive: the receiver retrieves either a valid message or a null.

Buffering: whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways:
- Zero capacity: the queue has a maximum length of zero.
- Bounded capacity: the queue has finite length n; thus, at most n messages can reside in it.
- Unbounded capacity: the queue's length is potentially infinite.
The zero-capacity case is sometimes referred to as a message system with no buffering; the other cases are referred to as systems with automatic buffering.
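As a concrete illustration of send()/receive(), here is a small sketch using POSIX message queues, one of several message-passing APIs; the queue name is a made-up example and error handling is omitted. Link with -lrt on Linux.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
        mqd_t q = mq_open("/my_queue", O_CREAT | O_RDWR, 0600, &attr);

        const char *msg = "hello";
        mq_send(q, msg, strlen(msg) + 1, 0);      /* send(message): blocks if the queue is full */

        char buf[64];
        mq_receive(q, buf, sizeof(buf), NULL);    /* receive(message): blocks if the queue is empty */
        printf("received: %s\n", buf);

        mq_close(q);
        mq_unlink("/my_queue");
        return 0;
    }

By default mq_send() and mq_receive() are blocking, matching the blocking send/receive semantics above; opening the queue with O_NONBLOCK gives the nonblocking variants.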
Concurrency Module – 4

Module 4: Synchronization

A cooperating process is one that can affect or be affected by other processes executing in the system. Cooperating processes can either directly share a logical address space or be allowed to share data only through shared memory or message passing. Concurrent access to shared data may result in data inconsistency.

Objective: the main objective of process synchronization is to ensure that multiple processes access shared resources without interfering with each other, and to prevent the possibility of inconsistent data due to concurrent access.

Synchronization is the way in which processes that share the same memory space are managed in an operating system. It helps maintain the consistency of data by using variables or hardware so that only one process can make changes to the shared memory at a time. Inconsistency of data can occur when various processes share a common resource, which is why process synchronization is needed in the operating system.

For example, if process P1 is trying to read the data present in a memory location while process P2 is trying to change the data at the same location, there is a high chance that the data read by P1 will be incorrect.

Synchronization and the Producer-Consumer Problem

Suppose the producer and consumer share a counter variable count. Since both processes manipulate count concurrently, a situation arises in which several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place; this is called a race condition. To guard against the race condition, we must ensure that only one process at a time can manipulate the variable count. The sketch below shows why even a single increment and decrement can interleave badly.
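A minimal illustration of the race on count, following the usual textbook decomposition of count++ and count-- into machine-level steps (the register names are illustrative):

    /* count++ is typically implemented as: */
    register1 = count;
    register1 = register1 + 1;
    count = register1;

    /* count-- is typically implemented as: */
    register2 = count;
    register2 = register2 - 1;
    count = register2;

    /* One bad interleaving, starting with count == 5:
       producer:  register1 = count          (register1 = 5)
       producer:  register1 = register1 + 1  (register1 = 6)
       consumer:  register2 = count          (register2 = 5)
       consumer:  register2 = register2 - 1  (register2 = 4)
       producer:  count = register1          (count = 6)
       consumer:  count = register2          (count = 4)
       The correct result is 5; the outcome depends on the interleaving. */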
The Critical Section Problem

Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which it may be accessing and updating data that is shared with at least one other process. The important feature is that when one process is executing in its critical section, no other process is allowed to execute in its critical section. The critical-section problem is to design a protocol that the processes can use to synchronize their activity so as to cooperatively share data.

Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section; the remaining code is the remainder section.

Elements/sections of a program/process:
- Entry section: decides the entry of a process into the critical section.
- Critical section: ensures that only one process at a time is modifying the shared data.
- Exit section: handles the entry of other processes into the shared data after one process finishes executing.
- Remainder section: the remaining part of the code not categorized above.

(Figure: critical-section problem primitives.)

Requirements for Synchronization

A solution to the critical-section problem must satisfy the following three requirements:
- Mutual exclusion: if a process is executing in its critical section, no other process may execute in its critical section at that time.
- Progress: if no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
- Bounded waiting: there exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Race Condition

When more than one process runs the same code or modifies the same memory or shared data, there is a risk that the result or value of the shared data may be incorrect, because all the processes try to access and modify the shared resource. This condition is called a race condition. Since many processes use the same data, the results may depend on the order (sequence) of their execution. In the context of critical sections, a race condition occurs when the end result of multiple thread executions varies depending on the sequence in which the threads execute.

(Figure: race condition when assigning a pid.)

Critical Section Problem

The critical-section problem could be solved simply in a single-core environment if we could prevent interrupts from occurring while a shared variable was being modified. This solution is not feasible in a multiprocessor environment.

Two general approaches are used to handle critical sections in operating systems: preemptive kernels and nonpreemptive kernels. A preemptive kernel allows a process to be preempted while it is running in kernel mode. A nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.

A nonpreemptive kernel is essentially free from race conditions on kernel data structures. A preemptive kernel may be more responsive, since there is less risk that a kernel-mode process will run for an arbitrarily long period before relinquishing the processor to waiting processes.

Restating the problem: a critical section is a code segment that can be accessed by only one process at a time. How do we design a way for cooperating processes to access shared resources without creating data inconsistencies?

Critical Section Problem – Peterson's Solution

Peterson's solution is a classical software-based solution to the critical-section problem. It guarantees that only one process executes the critical section at any point in time, and it is restricted to two processes that alternate execution between their critical sections and remainder sections.

Peterson's solution uses two shared variables:
- boolean flag[]: a Boolean array; flag[i] indicates whether process Pi is ready to enter its critical section.
- int turn: indicates whose turn it is to enter the critical section.
Shared variables:

    bool flag[2] = {false, false};  // initially neither process wants to enter
    int turn = 0;                   // initially it is process 0's turn

The structure of process Pi (with j denoting the other process, j = 1 - i):

    while (true) {
        flag[i] = true;             // Pi wants to enter its critical section
        turn = j;                   // give priority to Pj in case it is also trying to enter
        while (flag[j] && turn == j)
            ;                       // busy wait while Pj wants in and it is Pj's turn
        /* critical section */
        flag[i] = false;            // Pi has exited its critical section
        /* remainder section */
    }

Process Pj runs the symmetric code with i and j exchanged.

How it works:
- Each process has a flag entry (flag[0] for P0 and flag[1] for P1). flag[i] is set to true when process Pi wants to enter the critical section, signaling its intent.
- The turn variable decides which process should proceed if both processes try to enter the critical section simultaneously. This ensures that the other process gets a fair chance to access the critical section.

Before entering the critical section:
- Process Pi sets flag[i] = true, indicating it wants to enter the critical section.
- It then sets turn = j, where j is the index of the other process (if i = 0, then j = 1).
- The process then enters a busy-wait loop: while (flag[j] && turn == j). This loop exits only when the other process Pj is not interested in entering the critical section (flag[j] == false) or when the turn is in Pi's favor (turn != j).

Critical section: once the loop exits, process Pi enters the critical section. At this point, Pi can safely access shared resources without interference from Pj.

Exiting the critical section: after finishing its work in the critical section, process Pi sets flag[i] = false, indicating it no longer needs access. This allows the other process Pj to proceed if it is waiting.

In summary, when a process wants to execute its critical section, it:
- sets flag[i] to true, to indicate its intention to enter the critical section;
- sets turn to j, yielding the right to enter the critical section to process j first;
- busy-waits while flag[j] is true and turn is j;
- enters its critical section once the conditions no longer hold;
- sets flag[i] to false after exiting, to indicate that it has finished.

The process performs busy waiting until the other process has finished its own critical section.
After this, the current process enters its critical section. After completing the critical section, it sets its own flag to false, indicating it no longer wishes to execute.

Busy waiting:
- Busy waiting is a technique in which a process repeatedly checks a condition to enter its critical section instead of being blocked.
- In Peterson's solution, busy waiting occurs in the while loops where each process checks the flag and turn variables to determine whether it is safe to enter its critical section.
- During busy waiting, the process consumes CPU cycles while it waits for the condition to become true.

Key point: the turn variable acts as a tie-breaker when both processes want to enter the critical section at the same time. If both processes set their respective flags to true (indicating their intent to enter), turn decides which process enters first.

Critical Section Problem – Bakery Algorithm

The bakery algorithm is designed to ensure that only one process can access a critical section at a time. It is suitable when there are many processes (n > 2), and it is particularly useful in distributed systems where processes might be running on different machines.

- Each process is assigned a ticket number. Ticket/index pairs are ordered lexicographically: first by ticket number, then by process index.
- Before entering the critical section, a process acquires a ticket number, determined by finding the maximum ticket number among all other processes and incrementing it.
- A process waits until its ticket number is the smallest among all processes that are currently trying to enter the critical section or have already entered.
- Once a process has the smallest ticket number, it can enter the critical section.
- After exiting the critical section, the process releases its ticket, allowing other processes to acquire tickets and enter.

Example with three processes P1, P2, and P3:
- Initial state: all processes have their tickets set to 0.
- P1 acquires ticket number 1, P2 acquires ticket number 2, and P3 acquires ticket number 3.
- P1 now has the smallest ticket number and can enter the critical section.
- After P1 finishes and releases its ticket, P2 has the smallest ticket number and can enter. Finally, P3 enters.

The algorithm for process i (pseudocode):

    repeat
        choosing[i] := true;
        number[i] := max(number[0], number[1], ..., number[n-1]) + 1;
        choosing[i] := false;
        for j := 0 to n - 1 do
        begin
            while choosing[j] do no-op;  // wait until process j finishes choosing its number
            while number[j] != 0 and (number[j], j) < (number[i], i) do no-op;  // wait until it is i's turn
        end;
        // critical section
        number[i] := 0;  // exit the critical section
        // remainder section
    until false;

Properties:
- Fairness: it ensures that all processes eventually get to enter the critical section, preventing starvation.
- Freedom from starvation: no process is indefinitely prevented from entering the critical section.
- Freedom from deadlock: the algorithm prevents deadlocks by ensuring that processes always make progress.
- Distributed: it can be used in distributed systems where processes may be running on different machines.

How the pseudocode works:

- Choosing array (choosing[i]): a boolean array where choosing[i] is set to true while process i is selecting a number (a "ticket") for itself before entering the critical section, and set to false once the process has chosen its number.
- Number array (number[i]): holds the ticket number for each process, representing the process's place in line to enter the critical section. A higher number means the process will wait longer; a lower number means it gets to enter sooner.

Choosing the number, or ticket:
- choosing[i] := true; — process i sets its choosing flag, indicating that it is in the middle of choosing a number.
- number[i] := max(number[0], number[1], ..., number[n-1]) + 1; — process i picks a number one greater than the current maximum in the number array, so its ticket comes after those already issued. (Two processes may still draw equal numbers; ties are resolved by comparing process indices.)
- choosing[i] := false; — process i clears its choosing flag after it has chosen its number.

Waiting to enter the critical section:
- for j := 0 to n - 1 — process i checks all other processes j.
- while choosing[j] do no-op; — process i waits until process j finishes choosing its number, so that i does not make a decision based on incomplete information.
- while number[j] != 0 and (number[j], j) < (number[i], i) do no-op; — if process j holds a lower number (indicating higher priority) and has not yet left the critical section (number[j] != 0), process i waits. If number[j] == number[i], the process with the smaller index gets priority. This ensures fairness and prevents deadlock.

Entering the critical section: once process i has confirmed that it holds the smallest number (highest priority) among all competing processes, it enters the critical section.

Exiting the critical section: number[i] := 0; — after finishing, process i resets its number to 0, indicating that it no longer wishes to enter the critical section.

Remainder section: the process can then perform other non-critical operations before repeating the protocol if it needs to enter the critical section again.

Summary points:
- One of the simplest known solutions to the mutual exclusion problem for n processes.
- Ensures that shared resources are used efficiently in a multithreaded environment.
- It is free from starvation and serves processes in FIFO order of their tickets.
- It works with atomic registers.
- It is unreliable in the sense that any one of the processes can fail and halt the progress of the others.

A C rendering of the algorithm appears below.
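A C sketch of the Pascal-style pseudocode above. The process count N is an assumption, and volatile stands in for the atomic registers the algorithm presumes; on real hardware you would also need memory barriers or C11 atomics for correctness.

    #include <stdbool.h>

    #define N 5                         /* number of processes (assumption) */

    volatile bool choosing[N];
    volatile int  number[N];

    static int max_ticket(void) {
        int m = 0;
        for (int k = 0; k < N; k++)
            if (number[k] > m)
                m = number[k];
        return m;
    }

    void bakery_lock(int i) {
        choosing[i] = true;
        number[i] = max_ticket() + 1;   /* take a ticket one past the current maximum */
        choosing[i] = false;
        for (int j = 0; j < N; j++) {
            while (choosing[j])
                ;                       /* wait until j has finished choosing */
            while (number[j] != 0 &&
                   (number[j] < number[i] ||
                    (number[j] == number[i] && j < i)))
                ;                       /* wait while (number[j], j) < (number[i], i) */
        }
    }

    void bakery_unlock(int i) {
        number[i] = 0;                  /* release the ticket */
    }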
Concurrency Module – 4

Critical Section Problem: Hardware Support for Synchronization

Hardware-based solutions for critical sections leverage specialized instructions provided by the processor to ensure mutual exclusion. These instructions are atomic, meaning they execute as a single, indivisible operation, preventing other processes from interfering.

Synchronization Hardware 1 – Test and Set

- Test-and-set is a locking mechanism that prevents multiple processes from accessing the same resource.
- A shared Boolean variable, commonly known as a lock, is used.
- When a process wants to access a resource, it tests the lock. If the lock is false (unlocked), the process sets it to true (locked) and proceeds with its task. If the lock is true, the process waits until it is released.

An example (the function body must be executed atomically by the hardware):

    boolean lock = false;

    boolean TestAndSet(boolean &target) {
        boolean returnValue = target;
        target = true;
        return returnValue;
    }

    while (1) {
        while (TestAndSet(lock))
            ;                        // spin until the lock was free
        /* CRITICAL SECTION CODE */
        lock = false;
        /* REMAINDER SECTION CODE */
    }

Trace with two threads:
- Thread 1 enters the outer while(1) loop, then the inner while(TestAndSet(lock)) loop. TestAndSet sets lock to true and returns false, so Thread 1 exits the inner loop, executes the critical section code, sets lock to false, executes the remainder section code, and goes back to the beginning of the outer loop.
- Thread 2 enters the outer loop and then the inner loop. TestAndSet sees that lock is already true and returns true, so Thread 2 keeps spinning in the inner loop until Thread 1 releases the lock.

Note that the target parameter is passed by reference, meaning that any change made to target within the function also affects the original variable passed to the function; in this case, the original variable is lock. A hardware-backed version of the same idea is sketched below.
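For comparison, this is how the same spinlock looks with a real atomic primitive; the sketch uses the GCC/Clang __atomic builtins, since the TestAndSet() function above is atomic only by assumption.

    #include <stdbool.h>

    static volatile bool lock = false;

    void spin_lock(void) {
        /* Atomically set lock to true and return its previous value,
           which is exactly what TestAndSet() does, but guaranteed atomic. */
        while (__atomic_test_and_set(&lock, __ATOMIC_ACQUIRE))
            ;   /* spin: the lock was already held */
    }

    void spin_unlock(void) {
        __atomic_clear(&lock, __ATOMIC_RELEASE);   /* lock = false */
    }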
Synchronization Hardware 2 – Compare and Swap / Swap

This approach involves two operations: swap and unlock. A boolean lock variable is used here as well. The swap operation exchanges the value of the lock with a local variable: if the lock was false, it becomes true and the process proceeds. The unlock operation sets the lock back to false when the process is done.

    boolean lock = false;          // shared
    boolean key = false;           // one per process (local)

    void swap(boolean &a, boolean &b) {
        boolean temp = a;
        a = b;
        b = temp;
    }

    while (1) {
        key = true;
        while (key == true)
            swap(lock, key);       // must execute atomically
        /* CRITICAL SECTION CODE */
        lock = false;
        /* REMAINDER SECTION CODE */
    }

Each process executes the following loop:
- It sets its key to true, indicating its intention to enter the critical section.
- While key is still true, it repeatedly swaps the values of lock and key.
- If the swap succeeds (i.e., lock was false and key was true), the process has acquired the lock and can enter the critical section.
- After executing the critical section, the process sets lock to false to release the lock for other processes.
- The loop continues, allowing the process to try to enter the critical section again if necessary.

The compare_and_swap instruction expresses the same idea directly: it attempts to atomically update the lock variable. If the lock is currently 0, it is set to 1, indicating that the process has entered the critical section; if the lock is already 1, the process retries the loop until it successfully acquires the lock.

Variables:
- key: a boolean variable for each process, indicating whether the process is trying to enter the critical section.
- lock: a shared boolean variable representing the lock for the critical section.

(Figure: mutual exclusion with the compare_and_swap() instruction; a reconstruction follows.)
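The slides reference the textbook figure "Mutual exclusion with the compare_and_swap() instruction" without showing its body; the following is a sketch of that standard construction. The function is written in C for readability, but it must be executed atomically by the hardware.

    int lock = 0;   /* shared; 0 = free, 1 = held */

    /* Executed atomically by the hardware: returns the old value of *value,
       and sets *value to new_value only if it equals expected. */
    int compare_and_swap(int *value, int expected, int new_value) {
        int temp = *value;
        if (*value == expected)
            *value = new_value;
        return temp;
    }

    void process(void) {
        while (1) {
            while (compare_and_swap(&lock, 0, 1) != 0)
                ;                  /* spin until the lock was free (old value 0) */
            /* critical section */
            lock = 0;              /* release the lock */
            /* remainder section */
        }
    }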
Process Synchronization – Mutex

The hardware-based solutions to the critical-section problem are complicated and generally inaccessible to application programmers. Operating-system designers therefore build higher-level software tools to solve the critical-section problem; one such tool is the mutex lock.

- The mutex lock is used to protect critical sections and thus prevent race conditions. A process must acquire the lock before entering a critical section and release the lock when it exits: the acquire() function acquires the lock, and the release() function releases it.
- A mutex lock has a boolean variable available whose value indicates whether the lock is available. A process that attempts to acquire an unavailable lock is blocked until the lock is released.
- Calls to acquire() and release() must be performed atomically.
- The main disadvantage of this implementation is that it requires busy waiting, which wastes CPU cycles that some other process might use productively. This type of mutex lock is also called a spinlock, because the process "spins" while waiting for the lock to become available.

(Figure: solution to the critical-section problem using mutex locks.)

Process Synchronization – Semaphores

A semaphore S is an integer variable that is accessed only through two standard atomic operations: wait() and signal(). Semaphores were introduced by the Dutch scientist Edsger Dijkstra; the wait() operation was originally termed P (to test) and signal() was originally called V (to increment). Semaphores are used to enforce mutual exclusion, avoid race conditions, and implement synchronization between processes. There are two kinds: binary semaphores and counting semaphores.

Primitives: wait (P) and signal (V)
- The wait operation decrements the value of the semaphore, and the signal operation increments it.
- When the value of the semaphore is zero, any process that performs a wait operation is blocked until another process performs a signal operation.
- When a process performs a wait operation on a semaphore, the operation checks whether the value of the semaphore is greater than 0. If so, it decrements the value and lets the process continue its execution; otherwise, it blocks the process on the semaphore.
- A signal operation on a semaphore activates a process blocked on the semaphore, if there is one; otherwise it increments the value of the semaphore by 1. Because of these semantics, semaphores are also called counting semaphores.
- Wait()/Down()/P() helps control the entry of processes into the critical section: it decrements the value of its argument as soon as a process enters.
- Signal()/Up()/V() helps control a process's exit from the critical section: it increments the value of its argument as soon as execution finishes.

Semaphore types: binary semaphores and counting semaphores.

Binary Semaphores
- A binary semaphore is also known as a mutex lock.
- It can take only the two values 0 and 1, and its value is initialized to 1.
- It is used to implement solutions to critical-section problems with multiple processes.
- If the value of the semaphore was initially 1, then when a process enters the critical section, the wait operation decrements the value to 0, meaning that no more processes can access the critical section (ensuring mutual exclusion; this holds only for binary semaphores).
- Once the process exits the critical section, the signal operation executes and the value of the semaphore is incremented by 1, meaning that the critical section can now be accessed by another process.

A busy-waiting sketch:

    int semaphore = 1;

    void P() {
        while (semaphore == 0)
            ;                  // busy wait until the semaphore is unlocked
        semaphore = 0;         // lock the semaphore (enter critical section)
    }

    void V() {
        semaphore = 1;         // unlock the semaphore (exit critical section)
    }

Counting Semaphores
- Counting semaphores control access to a resource consisting of a finite number of instances and coordinate access to resources.
- The semaphore count is the number of resource instances available.
- If the value of the semaphore is above 0, processes can access the critical section or the shared resources; the value of the semaphore is the number of processes that can access the resource at once.
- If the value is 0, no resource instance is available, or the critical section is already fully occupied and cannot be accessed by more processes.
- Counting semaphores are generally used when a resource has more than one instance and multiple processes may access it.

Primitives:
- P() (wait): decreases the value of the semaphore. If the value is greater than 0, the process can proceed; if it is 0, the process must wait until the value becomes greater than 0.
- V() (signal): increases the value of the semaphore, signaling that a resource has been released and is available for other processes.

A busy-waiting sketch with three resource instances:

    int semaphore = 3;

    void P() {
        while (semaphore == 0)
            ;                  // busy wait until a resource becomes available
        semaphore--;           // decrease the semaphore value (acquire a resource)
    }

    void V() {
        semaphore++;           // increase the semaphore value (release a resource)
    }

Disadvantage of these semaphore implementations: busy waiting, which wastes CPU cycles; such an implementation is also called a spinlock.

Process Synchronization – Monitors

Monitors are a programming-language construct that aids in regulating access to shared data. A monitor is a package that contains shared data structures, operations on them, and synchronization between concurrent procedure calls.

An abstract data type (ADT) encapsulates data with a set of functions to operate on that data, independent of any specific implementation of the ADT. A monitor type is an ADT that includes a set of programmer-defined operations that are provided with mutual exclusion within the monitor.

Why monitors? Various errors are easily introduced when programmers use semaphores or mutex locks incorrectly, so a higher-level synchronization tool is needed.

Processes operating outside the monitor cannot access the monitor's internal variables, but they can call the monitor's procedures.

(Figure: pseudocode syntax of a monitor.)

In the pseudocode, monitor_name is the name of the monitor, and p1, p2, ..., pn are the procedures that provide access to the shared data and variables enclosed in the monitor. The monitor ensures that only one thread at a time is active inside it while using the shared resource.

Monitor components:
- Initialization: initialization code is executed once, when the monitor is created. It initializes any shared data structures the monitor will use to perform its tasks.
- Private data: an essential feature of monitors that keeps data private. It holds the monitor's confidential data, including private functions; private fields and functions are not visible outside the monitor.
- Monitor procedures: functions or methods that operate on the shared variables defined inside the monitor and execute within the monitor's context. They can be invoked from outside the monitor and are usually used to manipulate the shared resource.
- Queue: a queue data structure maintains the list of threads waiting for the shared resource. When a thread requests the shared resource while another thread is using it, the requester cannot proceed; when the resource becomes available, the thread at the front of the queue is granted access.

A function defined within a monitor can access only those variables declared locally within the monitor and its formal parameters.
The monitor construct ensures that only one process at a time is active within the monitor. To provide additional synchronization inside a monitor, the condition construct is provided.

Condition variables: a programmer who needs to write a tailor-made synchronization scheme can define one or more variables of type condition:

    condition x, y;

The only operations that can be invoked on a condition variable are wait() and signal(). The operation x.wait(); means that the process invoking it is suspended until another process invokes x.signal();.

Wait() function:
- Releases the mutex associated with the monitor and waits for a particular condition to become valid. (A mutex is a program object that enables multiple programs to share the same resource by taking turns.)
- When a thread calls wait(), it is added to the condition's queue and put to sleep: it releases the mutex and enters a blocked state until another thread signals the condition.
- Thus, whenever a wait operation is performed, the thread is suspended and placed in the condition variable's queue of blocked threads.

Signal() function:
- Wakes up a waiting thread, allowing it to proceed.
- It gives a chance to one of the threads blocked on that condition.

Overall working:
- When a thread wants to access the shared data, it enters the monitor. If no other thread is using the monitor, the thread proceeds.
- If another thread is already inside the monitor, the new thread is blocked until the monitor becomes available (the current thread exits the monitor).
- Inside the monitor, condition variables let a thread wait until a specific condition is met (e.g., waiting for input or for a buffer to be filled). When the condition occurs, another thread signals it, allowing the waiting thread to proceed.

(Figures: schematic view of a monitor; monitor with condition variables.)

A sketch of this pattern using POSIX threads follows.
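Monitors are a language feature (Java's synchronized methods are one realization), so C has no direct equivalent; the following sketch imitates one with a pthread mutex as the monitor lock and a pthread condition variable as the condition construct. The one-slot buffer is an assumed example.

    #include <pthread.h>
    #include <stdbool.h>

    /* A hand-rolled "monitor" guarding a one-slot buffer. */
    static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty     = PTHREAD_COND_INITIALIZER;  /* condition x */
    static int  slot;
    static bool full = false;

    void put(int v) {                         /* a monitor procedure */
        pthread_mutex_lock(&monitor_lock);    /* enter the monitor */
        slot = v;
        full = true;
        pthread_cond_signal(&nonempty);       /* x.signal(): wake one waiter */
        pthread_mutex_unlock(&monitor_lock);  /* leave the monitor */
    }

    int get(void) {                           /* another monitor procedure */
        pthread_mutex_lock(&monitor_lock);
        while (!full)                         /* x.wait(): release the lock and sleep */
            pthread_cond_wait(&nonempty, &monitor_lock);
        full = false;
        int v = slot;
        pthread_mutex_unlock(&monitor_lock);
        return v;
    }

pthread_cond_wait() atomically releases the mutex and blocks, then re-acquires the mutex before returning, which is exactly the wait() behavior described above.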
Implementing a Monitor Using Semaphores

A monitor can itself be implemented using semaphores:
- For each monitor, a binary semaphore mutex (initialized to 1) is provided to ensure mutual exclusion. A process must execute wait(mutex) before entering the monitor and signal(mutex) after leaving it.
- Signaling processes can use an additional binary semaphore, next (initialized to 0), to suspend themselves.
- An integer variable next_count counts the number of processes suspended on next.

Resuming Processes within a Monitor

How do we determine which of the suspended processes should be resumed next? One option is FCFS ordering. Alternatively, the conditional-wait construct can be used:

    x.wait(c);

where c is an integer expression evaluated when the wait() operation is executed. The value of c, called a priority number, is stored with the name of the suspended process. When x.signal() is executed, the process with the smallest priority number is resumed next.

(Figure: a monitor to allocate a single resource.)

Example, a ResourceAllocator monitor: each process, when requesting an allocation of the resource, specifies the maximum time it plans to use it, and the monitor allocates the resource to the process with the shortest time-allocation request. A process that needs to access the resource must observe the following sequence, where R is an instance of type ResourceAllocator:

    R.acquire(t);
    ...
    access the resource;
    ...
    R.release();

Problems that the monitor cannot prevent by itself:
- A process might access a resource without first gaining access permission to the resource.
- A process might never release a resource once it has been granted access to the resource.
- A process might attempt to release a resource that it never requested.
- A process might request the same resource twice.

Solutions: include the resource-access operations within the ResourceAllocator monitor itself, and check two conditions to establish the correctness of the system: user processes must always make their calls on the monitor in a correct sequence, and an uncooperative process must not be able to simply ignore the mutual-exclusion gateway provided by the monitor and access the shared resource directly, without using the access protocols.

Concurrency Module – 4

Classic Problems of Synchronization

The Bounded-Buffer Problem

This is the producer-consumer problem: the pool consists of n buffers, each capable of holding one item. The producer must not insert data when the buffer is full, the consumer must not consume data when the buffer is empty, and the producer and consumer must not manipulate the buffer at the same time.

Solution using semaphores; data structures used:
- m (mutex): a binary semaphore used to acquire and release the lock.
- empty: a counting semaphore whose initial value is the number of buffers in the pool (initially the pool is empty).
- full: a counting semaphore whose initial value is 0.

(Figure: the bounded-buffer solution using semaphores; a runnable sketch follows.)
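A runnable sketch of the bounded-buffer solution with POSIX semaphores and threads, following the data structures above (a pthread mutex plays the role of the binary semaphore m; the buffer size and item counts are assumptions). Compile with -pthread.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 5                             /* number of buffers (assumption) */

    int buffer[N];
    int in = 0, out = 0;

    sem_t empty_slots;                      /* "empty": initialized to N */
    sem_t full_slots;                       /* "full":  initialized to 0 */
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;   /* the binary semaphore "m" */

    void *producer(void *arg) {
        for (int item = 1; item <= 20; item++) {
            sem_wait(&empty_slots);         /* wait(empty) */
            pthread_mutex_lock(&m);         /* wait(m) */
            buffer[in] = item;
            in = (in + 1) % N;
            pthread_mutex_unlock(&m);       /* signal(m) */
            sem_post(&full_slots);          /* signal(full) */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < 20; i++) {
            sem_wait(&full_slots);          /* wait(full) */
            pthread_mutex_lock(&m);         /* wait(m) */
            int item = buffer[out];
            out = (out + 1) % N;
            pthread_mutex_unlock(&m);       /* signal(m) */
            sem_post(&empty_slots);         /* signal(empty) */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        sem_init(&empty_slots, 0, N);
        sem_init(&full_slots, 0, 0);
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }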
The Readers-Writers Problem

Suppose that a database is to be shared among several concurrent processes. Some of these processes may want only to read the database, whereas others may want to update (that is, read and write) it. If two readers access the shared data simultaneously, no adverse effects will result; however, if a writer and some other process (either a reader or a writer) access the database simultaneously, chaos may ensue.

To ensure that these difficulties do not arise, we require that writers have exclusive access to the shared database while writing to it. This synchronization problem is referred to as the readers-writers problem.

There are several ways to implement it:
1. No reader is kept waiting unless a writer has already obtained permission to use the shared object.
2. Once a writer is ready, it performs its write as soon as possible.
Either solution may lead to starvation.

Implementation using semaphores:
1. mutex: a semaphore (initialized to 1) used to ensure mutual exclusion when readcount is updated.
2. wrt: a semaphore (initialized to 1) common to both reader and writer processes.
3. readcount: an integer variable (initialized to 0) that keeps track of how many processes are currently reading the object.

The Dining-Philosophers Problem

This classic synchronization problem considers philosophers seated around a table, alternating between thinking and eating, with a single chopstick between each pair of neighbors.

(Figure: the situation of the dining philosophers.)

Data structures used: the five chopsticks are represented by five semaphores. A philosopher executes a wait() on a chopstick semaphore to grab the chopstick and a signal() to release it. If no two philosophers start to eat at the same time, the solution is fine; if all pick up their first chopstick simultaneously, however, it deadlocks (see the sketch below).

Several possible remedies to the deadlock problem:
- Allow at most four philosophers to be sitting simultaneously at the table.
- Allow a philosopher to pick up her chopsticks only if both chopsticks are available.
- Use an asymmetric solution: an odd-numbered philosopher picks up first her left chopstick and then her right chopstick, whereas an even-numbered philosopher picks up her right chopstick and then her left.
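A sketch of the semaphore solution just described, with the deadlock caveat spelled out; POSIX semaphores are used and the think/eat bodies are omitted.

    #include <semaphore.h>

    #define PHILOSOPHERS 5

    sem_t chopstick[PHILOSOPHERS];   /* each initialized to 1 with sem_init() */

    void philosopher(int i) {
        while (1) {
            /* think */
            sem_wait(&chopstick[i]);                       /* grab left chopstick  */
            sem_wait(&chopstick[(i + 1) % PHILOSOPHERS]);  /* grab right chopstick */
            /* eat */
            sem_post(&chopstick[i]);                       /* release left  */
            sem_post(&chopstick[(i + 1) % PHILOSOPHERS]);  /* release right */
        }
    }

    /* Deadlock: if all five philosophers execute the first sem_wait()
       at the same time, each holds one chopstick and waits forever for
       the other; the remedies listed above break this circular wait. */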
Module 5: Memory Management

Memory consists of a large array of bytes, each with its own address. The CPU fetches instructions from memory according to the value of the program counter. The memory unit sees only a stream of memory addresses; it does not know how they are generated or what they are for.

Direct Access to Memory and Registers

The CPU can directly access only two types of storage: main memory and the registers built into the processing cores. Any data or instruction being used must be in one of these locations; if it is not, it must be moved into memory for the CPU to access it. Registers are extremely fast (accessible in one clock cycle), but memory access is slower because it requires communication over the memory bus, so a cache is used in between.

Hardware Memory Protection

For proper operation, it is essential to protect the operating system from unauthorized access by user processes, and to prevent user processes from accessing each other's memory. This protection is enforced by hardware, using mechanisms such as base and limit registers. Each process has a separate memory space with a range of legal addresses that the process may access.

- The base register holds the smallest legal memory address for a process.
- The limit register specifies the size of the range of legal addresses.
- The CPU compares every address generated in user mode against these registers. Any illegal access results in a trap to the operating system, which treats the attempt as a fatal error.

In other words:
- Base address register: the starting memory address of a process in main memory; it defines the lowest legal address the process can access. The base address tells where the process begins in memory.
- Limit register: specifies the size of the process, i.e., the range of addresses the process is allowed to access, starting from the base address. The limit tells how much memory (in terms of size) the process can access, defining the upper boundary of accessible memory as base + limit.

User and Kernel Mode

The base and limit registers can be loaded only by the operating system, using a special privileged instruction that can be executed only in kernel mode; user programs cannot change these register values. Protection mechanisms thus ensure that user processes cannot access kernel memory or manipulate hardware directly. The operating system, executing in kernel mode, is given unrestricted access to both operating-system memory and users' memory. A sketch of the hardware address check follows.
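A minimal sketch of the hardware check just described; the base and limit values are arbitrary example numbers, and trap() stands in for the hardware trap to the operating system.

    #include <stdio.h>

    #define BASE  300040u   /* example base register value  */
    #define LIMIT 120900u   /* example limit register value */

    void trap(void) {
        /* in real hardware: trap to the OS, which treats this as a fatal error */
        fprintf(stderr, "trap: addressing error\n");
    }

    /* Performed by the CPU on every user-mode memory reference. */
    void check(unsigned addr) {
        if (addr < BASE || addr >= BASE + LIMIT)
            trap();
        /* otherwise the access to memory proceeds */
    }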
Address Binding – Process Execution

Programs are stored as binary executable files on disk and must be loaded into memory to run. The process accesses instructions and data from memory during execution, and when the process terminates, its memory is reclaimed for use by other processes.

Address Binding

Address binding is the process of mapping logical (virtual) addresses to physical addresses in memory. It can occur at different stages of a program's lifecycle: compile time, load time, or execution time. The main goal of address binding is to ensure that when a program accesses memory, it finds the right data or instruction in physical memory.

A user program goes through several steps, and addresses may be represented in different ways along the way. In source code, symbolic addresses are identifiers for variables, functions, classes, and other constructs. A compiler typically binds these symbolic addresses to relocatable addresses, and the linker or loader in turn binds the relocatable addresses to absolute addresses.

The binding of instructions and data to memory addresses can be done at any step along the way:
- Compile time: if it is known at compile time where the process will reside in memory, absolute code can be generated.
- Load time: if it is not known at compile time where the process will reside in memory, the compiler must generate relocatable code.
- Execution time: if the process can be moved during its execution from one memory segment to another, binding must be delayed until run time.

Compile-Time Address Binding

The compiler determines where the program will reside in memory. The addresses in the program are fixed during compilation, and if the program's memory location needs to change later (e.g., if it is loaded at a different address), it must be recompiled. Example: suppose a small program is always loaded into memory starting at address 1000; the compiler generates machine code that expects the program to start at this address. If the program is loaded elsewhere (e.g., at address 2000), the code will not work unless it is recompiled to reflect the new memory location.

Load-Time Address Binding

The exact memory location where the program will be loaded is not known at compile time. The linker or loader binds addresses when the program is loaded into memory; the addresses are relocatable, meaning they can adjust based on where the program lands. Example: the program is compiled with relocatable addresses, which say, in effect, "I need memory starting from some location." If the program is loaded into address 3000, the loader adjusts all addresses to work from that starting point; if it is later loaded into address 5000, the loader adjusts the addresses accordingly, with no recompilation needed.

Execution-Time Address Binding

The program can be moved in memory during its execution (e.g., due to swapping or paging). The binding happens dynamically at run time: the operating system keeps track of where the program is at any given time and translates addresses on the fly.

A worked example across the three stages:
- Compile time: when you write code like int x = 5;, the compiler assigns an address to x, say 1000. If the compiler knows that the program will always be loaded starting at memory location 4000, it can assign x the fixed address 5000 (4000 + 1000). Which kind of binding is this? (Compile-time.)
- Load time: now imagine the program might not always start at 4000; it could start at any available address. The program is compiled with relocatable addresses: x is stored at offset 1000 relative to the program's starting address. When the program is loaded into memory starting at 8000, the loader adjusts the address of x to 9000 (8000 + 1000).
- Execution time: if the program is moved to another part of memory (e.g., because of paging or swapping), the operating system adjusts the memory addresses dynamically during execution. If the OS moves the program to 12000, the address of x becomes 13000 (12000 + 1000) while the program is running.

Logical Address Space

A logical address is an address generated by the CPU during program execution. It is the address a program uses to reference memory locations, and it is visible to the user or programmer, often as symbolic references (variables, arrays, etc.). Logical addresses form the logical address space (also known as the virtual address space): the set of all addresses a program can generate. A logical address does not correspond directly to a location in physical memory; it is translated into a physical address by the memory management unit (MMU) via address-translation techniques such as paging or segmentation.

Why use logical addresses? They provide an extra layer of abstraction.
This allows programs to run without needing to know exactly where their data is stored in physical memory, and it gives the operating system the flexibility to move programs around in memory without the programs needing to know.

Physical Address Space

A physical address is an actual address in main memory (RAM). It represents the real location of data or instructions in the physical memory hardware. This address is not visible to the user or programmer; it is used only by the hardware (the memory management unit, CPU, and OS). Physical addresses form the physical address space: the set of all physical memory locations in the hardware. The physical address is obtained by translating the logical address using the memory management unit (MMU).

When a program is running, the CPU generates a logical address (e.g., 0x005A), which is not the actual location in physical memory. The MMU translates this logical address to a physical address (e.g., 0xF12345), which is where the data is stored in RAM.

Memory Management Unit (MMU)

The MMU is a special hardware component that translates the virtual (logical) addresses used by a program into physical addresses that point to actual locations in the computer's memory (RAM). Think of the MMU as a translator that maps the addresses a program uses into real memory addresses behind the scenes. A running program does not directly use the physical addresses where its data is stored; it uses virtual addresses, which are more flexible and can differ from the physical addresses. The MMU converts these virtual addresses to physical addresses so that the CPU can find the correct location in actual memory.

In this simple scheme, the base register is now called a relocation register. The relocation register is a special part of the MMU that holds a fixed offset, which the MMU adds to shift (relocate) each virtual address to the correct physical address. The offset is the difference between where the program thinks it is running and where it is actually located in physical memory.

Relocation register example: imagine a process whose virtual addresses start at 0 but which actually starts at physical address 5000 in real memory. The relocation register holds the value 5000, so if the program asks for data at virtual address 20, the MMU adds 5000 to it, making the actual physical address 5020. A small sketch of this translation follows.
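A toy model of the relocation-register translation above; the relocation value is the slide's example number, the limit is an assumption, and the trap is simulated with an error message.

    #include <stdio.h>

    #define RELOCATION 5000   /* where the process actually starts in RAM */
    #define LIMIT      3000   /* size of the process's address space (assumption) */

    /* What the MMU does on every memory reference. */
    int translate(int logical) {
        if (logical < 0 || logical >= LIMIT) {
            fprintf(stderr, "trap: addressing error at %d\n", logical);
            return -1;
        }
        return logical + RELOCATION;   /* physical = logical + relocation */
    }

    int main(void) {
        printf("%d\n", translate(20));   /* prints 5020, as in the slide's example */
        translate(4000);                 /* beyond the limit: traps */
        return 0;
    }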
Module 5 Static Loading
 When a program is about to run, static loading means that everything the program needs (code, data, etc.) is loaded into memory before it starts.
 The memory locations for all components are set in advance, and they don't change while the program is running.
 Pros: fast access to everything, since it is already in memory.
 Cons: this can waste memory if not all components are actually used during the program's run.
 Example: imagine opening a big book. You copy the entire book onto your desk even if you might only read a few pages.

Module 5 Static Linking
 Static linking happens when the necessary libraries are added directly into the program's file when it is created.
 The final program file contains everything it needs to run on its own, so it doesn't need to find any external libraries at runtime.
 Pros: it makes the program independent and portable.
 Cons: the file becomes larger, and if many programs use the same library, each will have its own copy, leading to redundancy.

Module 5 Dynamic Linking
 With dynamic linking, the program does not include the libraries in its final file. Instead, it connects to these libraries while running.
 This allows many programs to share a single copy of a library, saving memory and reducing file size.
 Pros: saves space and avoids redundancy.
 Cons: the program depends on these libraries being available on the system during runtime.

Module 5 Dynamic Loading
 In dynamic loading, the program loads components into memory only when they are needed during execution. It doesn't load everything upfront.
 With dynamic loading, a routine is not loaded until it is called. All routines are kept on disk in a relocatable load format.
 The relocatable linking loader is then called to load the desired routine into memory and to update the program's address tables to reflect this change.
 The advantage of dynamic loading is that a routine is loaded only when it is needed. Dynamic loading does not require special support from the operating system.
 It gives better memory management, as unused parts don't take up space, but it can slow the program down slightly if it has to load new components while running. (A runnable sketch follows.)
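To make runtime loading concrete, here is a hedged POSIX sketch using the dlopen/dlsym API. The library name libmath_helpers.so and the symbol square are invented for illustration; compile with -ldl on Linux:

```c
#include <stdio.h>
#include <dlfcn.h>   /* dlopen, dlsym, dlclose, dlerror */

int main(void)
{
    /* Hypothetical shared library; nothing is loaded until this call. */
    void *handle = dlopen("./libmath_helpers.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine only when we actually need it. */
    int (*square)(int) = (int (*)(int))dlsym(handle, "square");
    if (!square) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("square(7) = %d\n", square(7));
    dlclose(handle);
    return 0;
}
```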
Module 5 Memory Management - Contiguous Memory Allocation
 The computer's memory must hold both the operating system (which controls the computer) and the various programs (or processes) that users are running.
 The memory is typically divided into two main sections: one for the operating system and one for user processes.
 Often, several user processes are running at the same time, so the system needs to decide how to allocate the available memory to these processes.
 In contiguous memory allocation, each process is assigned a single, continuous section of memory, and these sections are placed next to each other.

Module 5 Memory Protection
 When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers. Every address generated by the CPU is checked against these registers.
 The relocation-register scheme provides an effective way to allow the operating system's size to change dynamically.
Figure: hardware support with relocation and limit registers.

Module 5 Memory Allocation - Fixed/Static Partitioning
 In this method, the memory is divided into fixed-size blocks before any processes are loaded.
 A problem arises when a process doesn't perfectly fit into a block. This leaves unused gaps called "fragments," which waste memory.
 Fragmentation makes it harder to allocate memory to new processes, and it can reduce the overall available memory.

Module 5 Memory Allocation - Dynamic Partitioning
 One of the simplest methods of allocating memory is to assign processes to variably sized partitions in memory, where each partition may contain exactly one process.
 The operating system keeps track of which parts of memory are available and which are occupied using a memory table.
Figure: Variable partition.
 Initially, all memory is considered one large "hole" that can be divided into smaller holes as needed.
 When a process arrives, the system searches for a hole that's big enough to fit it. If the hole is too large, it's split into two parts.
 When a process finishes, its memory is returned to the set of holes, which can then be merged with adjacent holes to create larger ones.

Module 5 Dynamic Storage Allocation Problem
 The dynamic storage allocation problem involves effectively managing memory by satisfying requests for memory blocks of varying sizes from a list of available memory blocks (holes). A hole is a block of contiguous memory that is currently unoccupied and available for allocation to processes or other data structures.
 How do we satisfy a request of size n from a list of free holes? The first-fit, best-fit, and worst-fit strategies are the ones most commonly used to select a free hole from the set of available holes. (All three are sketched in code after this list.)
 First fit: allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended. For example, suppose we have the following memory partitions: | 10 KB | 20 KB | 19 KB | 25 KB | 30 KB |. A process requests 18 KB of memory. The OS starts searching from the beginning and finds the first partition that is big enough, the 20 KB one. It allocates the process to that partition and keeps the remaining 2 KB as free memory.
 Best fit: allocate the smallest hole that is big enough. We must search the entire list, unless the list is ordered by size. With the same partitions, the 18 KB request goes into the 19 KB partition, leaving 1 KB free.
 Worst fit: allocate the largest hole. Again, we must search the entire list, unless it is sorted by size. With the same partitions, the OS finds the largest free partition, 30 KB, allocates the process to it, and keeps the remaining 12 KB as free memory.
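A compact C sketch of the three strategies over the slide's partition list | 10 KB | 20 KB | 19 KB | 25 KB | 30 KB |. Real allocators split and coalesce holes in a linked list; this sketch only selects one:

```c
#include <stdio.h>

#define NHOLES 5

/* Each function returns the index of the chosen hole, or -1 if no
 * hole is large enough (the 18 KB request below always fits). */
static int first_fit(const int hole[], int n, int req)
{
    for (int i = 0; i < n; i++)
        if (hole[i] >= req) return i;      /* first one that fits */
    return -1;
}

static int best_fit(const int hole[], int n, int req)
{
    int best = -1;
    for (int i = 0; i < n; i++)            /* smallest hole that fits */
        if (hole[i] >= req && (best < 0 || hole[i] < hole[best]))
            best = i;
    return best;
}

static int worst_fit(const int hole[], int n, int req)
{
    int worst = -1;
    for (int i = 0; i < n; i++)            /* largest hole overall */
        if (hole[i] >= req && (worst < 0 || hole[i] > hole[worst]))
            worst = i;
    return worst;
}

int main(void)
{
    int hole[NHOLES] = {10, 20, 19, 25, 30};   /* sizes in KB */
    int req = 18;
    printf("first fit -> %d KB hole\n", hole[first_fit(hole, NHOLES, req)]); /* 20 */
    printf("best fit  -> %d KB hole\n", hole[best_fit(hole, NHOLES, req)]);  /* 19 */
    printf("worst fit -> %d KB hole\n", hole[worst_fit(hole, NHOLES, req)]); /* 30 */
    return 0;
}
```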
Module 5 Fragmentation
 First fit is fast and simple to implement.
 Both the first-fit and best-fit strategies suffer from external fragmentation: there is enough total memory space to satisfy a request, but the available spaces are not contiguous.
 Statistical analysis of first fit reveals that, even with some optimization, given N allocated blocks, another 0.5N blocks will be lost to fragmentation. That is, 0.5N of every 1.5N blocks, one-third of memory, may be unusable. This property is known as the 50-percent rule.
 Keeping track of all these holes also increases bookkeeping overhead. To avoid this, one approach is to break physical memory into fixed-size blocks and allocate memory in units based on the block size.
 With this approach, the memory allocated to a process may be slightly larger than the requested memory. The difference between these two numbers is internal fragmentation: unused memory that is internal to a partition. For example, with 4 KB blocks, a 10 KB request receives three blocks (12 KB), leaving 2 KB of internal fragmentation.
 One solution to the problem of external fragmentation is compaction. The goal is to shuffle the memory contents so as to place all free memory together in one large block.
 Another possible solution is to permit the logical address space of processes to be noncontiguous, thus allowing a process to be allocated physical memory wherever such memory is available. This is the strategy used in paging, the most common memory-management technique for computer systems.

Module 5 Contiguous Memory Allocation Schemes - Problem 1
 Suppose you have a memory with the following free blocks: 12 KB, 6 KB, 14 KB, 19 KB, 21 KB. Four processes arrive that require 10 KB, 4 KB, 7 KB, and 18 KB of memory, respectively. Using the different allocation techniques, show how the processes would be allocated to memory.
 With first fit, scanning the blocks from the beginning for each process in turn:

| Process | Process Size | Block | Block Size | Can Occupy? | Memory Wastage |
|---------|--------------|-------|------------|-------------|----------------|
| P1      | 10 KB        | B1    | 12 KB      | Yes         | 2 KB           |
| P2      | 4 KB         | B2    | 6 KB       | Yes         | 2 KB           |
| P3      | 7 KB         | B3    | 14 KB      | Yes         | 7 KB           |
| P4      | 18 KB        | B4    | 19 KB      | Yes         | 1 KB           |

Module 5 How Virtual Memory Works
 The program executes a load instruction that specifies a virtual address (VA).
 The system translates this address to a physical address (PA); a map table performs the VA -> PA translation.
 If the PA is not in memory, the OS loads the data in from disk and updates the map table; the program then reads from RAM.
 In the slide walkthrough, the map table holds VA 512 -> PA 12, VA 786 -> Disk, VA 1024 -> PA 2. The program executes ld R3, 1024(R0) (data found at PA 2) and ld R2, 512(R0) (data found at PA 12), then Add R3, R2. The final ld R5, 786(R0) finds its data on disk, so the OS brings it into memory at PA 0 and updates the entry to VA 786 -> PA 0 before the read completes. The sketch below mimics this path.
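A hedged C sketch of that lookup-and-fault path, with the slide's toy map table hard-coded. A real OS keeps per-process page tables and takes the fault in hardware; load_from_disk here is a stand-in:

```c
#include <stdio.h>

#define DISK (-1)          /* sentinel: the page lives on disk */
#define NMAP 3

struct map_entry { int va; int pa; };

/* Toy map table from the slides: VA 786 starts out on disk. */
static struct map_entry map[NMAP] = {
    {  512, 12   },
    {  786, DISK },
    { 1024, 2    }
};

/* Stand-in for the OS path that picks a free frame and reads the
 * data in from disk; frame 0 is what the slide walkthrough uses. */
static int load_from_disk(int va)
{
    printf("fault: loading va %d from disk into pa 0\n", va);
    return 0;
}

static int translate(int va)
{
    for (int i = 0; i < NMAP; i++) {
        if (map[i].va != va) continue;
        if (map[i].pa == DISK)          /* not resident: handle the fault */
            map[i].pa = load_from_disk(va);
        return map[i].pa;
    }
    return -1;                          /* no mapping at all */
}

int main(void)
{
    printf("ld R3, 1024(R0): pa %d\n", translate(1024)); /* 2             */
    printf("ld R2,  512(R0): pa %d\n", translate(512));  /* 12            */
    printf("ld R5,  786(R0): pa %d\n", translate(786));  /* fault, then 0 */
    return 0;
}
```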
Module 5 Page Table
 The map from virtual addresses (VA) to physical addresses (PA) is the page table.
 In our example, the page table had one page table entry (PTE) for every virtual address.

Module 5 Page Table - Size
 How big is this page table when every word has its own entry (virtual addresses are word accesses)?
 A 32-bit virtual address space holds 2^32 bytes, or 2^30 words, so a per-word table needs 2^30 (about 1 billion) entries.
 In our example, each entry holds a 32-bit (4-byte) physical address, so the page table alone needs 2^30 x 4 B = 4 GB of memory.
 We must be able to translate every possible address: if a page table entry is missing, that address cannot be accessed in physical memory.
 The question is how to make the page table more manageable. The answer is to divide memory into chunks called pages instead of words: a coarse-grain entry covers 4 KB of data, compared with the 4 B covered by a per-word entry.
 Fine-grain translation maps each word to main memory (as seen before); coarse-grain translation maps whole chunks (pages), such as VA 0-4095 and 4096-8191, to main memory.

Module 5 Page Table - Pages
 The page table manages larger chunks (pages) of data, so far fewer page table entries are needed to cover the whole address space.
 The cost is less flexibility: a complete page must be moved into RAM as a unit before any word in it can be accessed.
 With typical 4 KB pages (1024 words per page), the 2^30 words of the address space are covered by about 1 million page table entries, which is close to 4 MB for the page table. (The quick check below reproduces these numbers.)
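A back-of-the-envelope C check of the slide's numbers: per-word versus per-page entries for a 32-bit virtual address space with 4-byte page table entries:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t va_space = 1ull << 32;   /* 4 GB virtual address space  */
    uint64_t word     = 4;            /* bytes per word              */
    uint64_t page     = 4096;         /* bytes per page (4 KB)       */
    uint64_t pte      = 4;            /* bytes per page table entry  */

    uint64_t fine   = va_space / word;   /* 2^30: ~1 billion entries */
    uint64_t coarse = va_space / page;   /* 2^20: ~1 million entries */

    printf("per-word entries: %llu -> table size %llu MB\n",
           (unsigned long long)fine,
           (unsigned long long)(fine * pte >> 20));    /* 4096 MB = 4 GB */
    printf("per-page entries: %llu -> table size %llu MB\n",
           (unsigned long long)coarse,
           (unsigned long long)(coarse * pte >> 20));  /* 4 MB */
    return 0;
}
```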
Module 5 Page Table - Pages - How Do We Map?
Figure: a 16 KB virtual address space (four 4 KB pages, addresses 0-16383) mapped through the page table onto a 12 KB physical address space (three 4 KB frames, addresses 0-12287).
 Worked question: what is the physical address for VA 4?
 VA 4 falls in virtual page 0 (addresses 0-4095) at offset 4, so its physical address is the base of whichever frame the page table assigns to page 0, plus 4. If page 0 maps to the frame starting at 8192, for example, the physical address is 8196.
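A sketch of the page-number/offset split behind that answer. The frame_base mapping is assumed for illustration, since the figure's exact assignment is not recoverable from the transcript:

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u     /* 4 KB pages, as in the slides */

int main(void)
{
    /* Assumed mapping from virtual page number to frame base address;
     * with only three frames, the fourth page would sit on disk. */
    uint32_t frame_base[3] = { 8192, 0, 4096 };
    uint32_t va = 4;

    uint32_t vpn    = va / PAGE_SIZE;   /* virtual page number: 0 */
    uint32_t offset = va % PAGE_SIZE;   /* offset within page:  4 */
    uint32_t pa     = frame_base[vpn] + offset;

    printf("va %u -> page %u, offset %u -> pa %u\n", va, vpn, offset, pa);
    return 0;
}
```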
