OP Revision Data Science.pdf



Regarding the essay questions: learn all the definitions, and whether there are advantages, disadvantages, limitations, or types. Questions are also based on the concepts, for example: define deadlock, and what we can do in the case of deadlock avoidance or prevention. These types of topics. The simple definitions we learn for the MCQs, while the important definitions we treat as essay material.

Operating Systems

Module 1
Chapter 1: Introduction
Chapter 2: Operating-System Services

Module 2
Chapter 3: Processes
Chapter 4: Threads & Concurrency

PROCESS STATES

As a process executes, it changes state. The state of a process is defined in part by the current activity of that process.
- New: The process is being created.
- Running: Instructions are being executed.
- Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
- Ready: The process is waiting to be assigned to a processor.
- Terminated: The process has finished execution.

PROCESS CONTROL BLOCK

Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains many pieces of information associated with a specific process, including:
- Process state: The state may be new, ready, running, waiting, halted, and so on.
- Program counter: The counter indicates the address of the next instruction to be executed for this process.
- CPU registers: The registers vary in number and type, depending on the computer architecture.
- CPU-scheduling information: This includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
- Memory-management information: This may include such items as the values of the base and limit registers and the page tables, or the segment tables.
- Accounting information: This includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
- I/O status information: This includes the list of I/O devices allocated to the process, a list of open files, and so on.

CONTEXT SWITCH

When the CPU switches to another process, the system must save the state of the old process and load the saved state for the new process via a context switch.
- The context of a process is represented in the PCB.
- Context-switch time is pure overhead; the system does no useful work while switching.
- The more complex the OS and the PCB, the longer the context switch.
- The time depends on hardware support; some hardware provides multiple sets of registers per CPU, so multiple contexts can be loaded at once.

INTERPROCESS COMMUNICATION

Processes executing concurrently in the operating system may be either independent processes or cooperating processes.
- A process is independent if it does not share data with any other process executing in the system.
- A process is cooperating if it can affect or be affected by the other processes executing in the system.

There are several reasons for providing an environment that allows process cooperation:
- Information sharing: Since several applications may be interested in the same piece of information (for instance, copying and pasting), we must provide an environment that allows concurrent access to such information.
- Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which executes in parallel with the others.
- Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.

There are two models of IPC: (a) shared memory and (b) message passing.

MULTICORE PROGRAMMING

Types of parallelism:
1. Data parallelism – distributes subsets of the same data across multiple cores, with the same operation performed on each.
2. Task parallelism – distributes threads across cores, with each thread performing a unique operation.

MULTITHREADING MODELS

- Many-to-One
- One-to-One
- Many-to-Many

Module 3
Chapter 5: CPU Scheduling

PREEMPTIVE AND NONPREEMPTIVE SCHEDULING

- When scheduling takes place only under circumstances 1 and 4, the scheduling scheme is nonpreemptive; otherwise, it is preemptive.
- Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
- Virtually all modern operating systems, including Windows, macOS, Linux, and UNIX, use preemptive scheduling algorithms.

DISPATCHER

The dispatcher module gives control of the CPU to the process selected by the CPU scheduler; this involves:
- switching context
- switching to user mode
- jumping to the proper location in the user program to restart that program

Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.

SCHEDULING CRITERIA

- CPU utilization – keep the CPU as busy as possible (maximize)
- Throughput – number of processes that complete their execution per time unit (maximize)
- Turnaround time – amount of time to execute a particular process (minimize)
- Waiting time – amount of time a process has been waiting in the ready queue (minimize)
- Response time – amount of time from when a request was submitted until the first response is produced (minimize)

FIRST-COME, FIRST-SERVED (FCFS) SCHEDULING

Process  Burst Time
P1       24
P2       3
P3       3

Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 runs 0-24, P2 runs 24-27, P3 runs 27-30.

Waiting time for P1 = 0; P2 = 24; P3 = 27.
Average waiting time: (0 + 24 + 27)/3 = 17.

ROUND ROBIN (RR)

Each process gets a small unit of CPU time (a time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
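The FCFS arithmetic above can be checked with a short script. This is an illustrative sketch, not code from the course; the function name fcfs_waiting_times is made up, and all processes are assumed to arrive at time 0 in list order.

```python
# Sketch (assumed helper, not from the slides): FCFS waiting times for
# the example above -- burst times P1=24, P2=3, P3=3, arriving in order.

def fcfs_waiting_times(bursts):
    """Return per-process waiting times under FCFS, assuming all
    processes arrive at time 0 in list order."""
    waits = []
    elapsed = 0
    for burst in bursts:
        waits.append(elapsed)   # each process waits for all earlier bursts
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # 17.0
```

Reordering the list (e.g., running the short bursts first) immediately shows why FCFS can give poor average waiting times when a long burst arrives first.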
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units. A timer interrupts every quantum to schedule the next process.

Performance:
- q large: behaves like FIFO.
- q small: q must still be large with respect to the context-switch time, otherwise the overhead is too high.

EXAMPLE OF PRIORITY SCHEDULING

Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2

With priority scheduling (a lower number indicates a higher priority), the Gantt chart runs P2, P5, P1, P3, P4, and the average waiting time = 8.2.

Module 4
Chapter 6: Synchronization Tools

CRITICAL-SECTION PROBLEM

Consider a system of n processes {P0, P1, ..., Pn-1}. Each process has a critical section, a segment of code in which the process may be changing common variables, updating a table, writing a file, and so on.
- When one process is in its critical section, no other process may be in its critical section.
- The critical-section problem is to design a protocol to solve this.
- Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, and then executes its remainder section. This is the general structure of a process Pi.

REQUIREMENTS FOR A SOLUTION TO THE CRITICAL-SECTION PROBLEM

1. Mutual exclusion – If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress – If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
3. Bounded waiting – A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
4. Assume that each process executes at a nonzero speed.
5. No assumption is made concerning the relative speed of the n processes.

RACE CONDITION

Processes P0 and P1 are creating child processes using the fork() system call. There is a race condition on the kernel variable next_available_pid, which represents the next available process identifier (pid). Unless there is a mechanism to prevent P0 and P1 from accessing next_available_pid at the same time, the same pid could be assigned to two different processes!

HARDWARE INSTRUCTIONS

Special hardware instructions allow us to either test-and-modify the content of a word, or to swap the contents of two words, atomically (uninterruptibly):
- the test-and-set instruction
- the compare-and-swap instruction

The test_and_set instruction is defined as:

    boolean test_and_set(boolean *target) {
        boolean rv = *target;
        *target = true;
        return rv;
    }

Properties:
- Executed atomically.
- Returns the original value of the passed parameter.
- Sets the new value of the passed parameter to true.

MUTEX LOCK

- Previous solutions are complicated and generally inaccessible to application programmers.
- OS designers build software tools to solve the critical-section problem; the simplest is the mutex lock.
- A Boolean variable indicates whether the lock is available or not.
- Protect a critical section by first calling acquire() on the lock, then release() when done.
- Calls to acquire() and release() must be atomic, usually implemented via hardware atomic instructions such as compare-and-swap.
- But this solution requires busy waiting, so this lock is also called a spinlock.

SEMAPHORE

- Counting semaphore – the integer value can range over an unrestricted domain.
- Binary semaphore – the integer value can range only between 0 and 1; same as a mutex lock.
- A counting semaphore S can be implemented as a binary semaphore.
- With semaphores we can solve various synchronization problems.

Module 5
Chapter 7: Synchronization Examples

CLASSICAL PROBLEMS OF SYNCHRONIZATION

Classical problems used to test newly proposed synchronization schemes:
- Bounded-Buffer Problem
- Readers and Writers Problem
- Dining-Philosophers Problem

Module 6
Chapter 8: Deadlocks

DEADLOCK CHARACTERIZATION

Deadlock can arise if four conditions hold simultaneously:
- Mutual exclusion: only one process at a time can use a resource.
- Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
- No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.
- Circular wait: there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

RESOURCE-ALLOCATION GRAPH

A resource-allocation graph is a set of vertices V and a set of edges E.
V is partitioned into two types:
- P = {P1, P2, ..., Pn}, the set consisting of all the processes in the system
- R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system

There are two kinds of edges:
- request edge – a directed edge Pi -> Rj
- assignment edge – a directed edge Rj -> Pi

RESOURCE-ALLOCATION GRAPH EXAMPLE

- One instance of R1, two instances of R2, one instance of R3, three instances of R4.
- T1 holds one instance of R2 and is waiting for an instance of R1.
- T2 holds one instance of R1 and one instance of R2, and is waiting for an instance of R3.
- T3 holds one instance of R3.

METHODS FOR HANDLING DEADLOCKS

- Ensure that the system will never enter a deadlock state: deadlock prevention or deadlock avoidance.
- Allow the system to enter a deadlock state and then recover.
- Ignore the problem and pretend that deadlocks never occur in the system.

DEADLOCK PREVENTION

Invalidate one of the four necessary conditions for deadlock:
1. Mutual exclusion – not required for sharable resources (e.g., read-only files); must hold for non-sharable resources.
2. Hold and wait – must guarantee that whenever a process requests a resource, it does not hold any other resources. Require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when it has none allocated to it. Low resource utilization; starvation is possible.
3. No preemption – if a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released. Preempted resources are added to the list of resources for which the process is waiting, and the process is restarted only when it can regain its old resources as well as the new ones it is requesting.
4. Circular wait – impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.

DEADLOCK AVOIDANCE

Requires that the system have some additional a priori information available:
- The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.
- The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
- The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.

BANKER'S ALGORITHM

- Multiple instances of resources.
- Each process must claim its maximum use a priori.
- When a process requests a resource, it may have to wait.
- When a process gets all its resources, it must return them in a finite amount of time.

EXAMPLE OF BANKER'S ALGORITHM

5 processes P0 through P4; 3 resource types: A (10 instances), B (5 instances), and C (7 instances).

Snapshot at time T0:

      Allocation   Max      Available
      A B C        A B C    A B C
P0    0 1 0        7 5 3    3 3 2
P1    2 0 0        3 2 2
P2    3 0 2        9 0 2
P3    2 1 1        2 2 2
P4    0 0 2        4 3 3
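As a sanity check, here is a hedged sketch of the Banker's safety algorithm applied to the snapshot above. The matrices are copied from the example; the function name safe_sequence is made up, and the greedy scan may produce a safe ordering different from (but just as valid as) the one quoted in the slides.

```python
# Sketch (assumed helper, not from the slides): Banker's safety check
# for the snapshot above. Resources: A=10, B=5, C=7 instances.

allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
available  = [3,3,2]

# Need = Max - Allocation, row by row
need = [[m - a for m, a in zip(mrow, arow)]
        for mrow, arow in zip(maximum, allocation)]

def safe_sequence(alloc, need, avail):
    """Return a safe sequence of process indices, or None if unsafe."""
    work = list(avail)
    finished = [False] * len(alloc)
    order = []
    while len(order) < len(alloc):
        progressed = False
        for i in range(len(alloc)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its resources.
                work = [w + a for w, a in zip(work, alloc[i])]
                finished[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return None     # no process can proceed: the state is unsafe
    return order

print(need[0])                                   # [7, 4, 3]
print(safe_sequence(allocation, need, available))  # [1, 3, 4, 0, 2]
```

Any returned sequence proves the state is safe; here the greedy scan finds P1, P3, P4, P0, P2, while the slides quote P1, P3, P4, P2, P0, and both satisfy the safety criteria.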
The content of the matrix Need is defined to be Max – Allocation:

      Need
      A B C
P0    7 4 3
P1    1 2 2
P2    6 0 0
P3    0 1 1
P4    4 3 1

The system is in a safe state, since the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.

RESOURCE-ALLOCATION GRAPH AND WAIT-FOR GRAPH

(Figure: a resource-allocation graph and its corresponding wait-for graph.)

Module 7
Chapter 9: Main Memory

ADDRESS SPACE

- The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
- Logical address – generated by the CPU; also referred to as a virtual address.
- Physical address – the address seen by the memory unit.
- Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
- Logical address space – the set of all logical addresses generated by a program.
- Physical address space – the set of all physical addresses corresponding to those logical addresses.

MEMORY MANAGEMENT UNIT (MMU)

The MMU is a hardware device that maps virtual to physical addresses at run time. Consider a simple scheme, which is a generalization of the base-register scheme.
- The base register is now called the relocation register.
- The value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
- The user program deals with logical addresses; it never sees the real physical addresses.
- Execution-time binding occurs when a reference is made to a location in memory; the logical address is bound to a physical address.

CONTIGUOUS ALLOCATION

- Main memory must support both the OS and user processes. It is a limited resource and must be allocated efficiently; contiguous allocation is one early method.
- Main memory is usually divided into two partitions: the resident operating system, usually held in low memory with the interrupt vector, and user processes, held in high memory.
- Each process is contained in a single contiguous section of memory.
- Relocation registers are used to protect user processes from each other, and from changing operating-system code and data.
- The base register contains the value of the smallest physical address; the limit register contains the range of logical addresses – each logical address must be less than the limit register.
- The MMU maps logical addresses dynamically. This allows actions such as kernel code being transient and the kernel changing size.

FRAGMENTATION

- External fragmentation – total memory space exists to satisfy a request, but it is not contiguous.
- Internal fragmentation – allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
- First-fit analysis reveals that for N allocated blocks, another 0.5N blocks are lost to fragmentation, so one third of memory may be unusable. This is the 50-percent rule.

PAGING

- The physical address space of a process can be noncontiguous; the process is allocated physical memory whenever the latter is available.
- Avoids external fragmentation and the problem of varying-sized memory chunks.
- Divide physical memory into fixed-sized blocks called frames. The frame size is a power of 2, between 512 bytes and 16 MB.
- Divide logical memory into blocks of the same size, called pages.
- Keep track of all free frames. To run a program of size N pages, find N free frames and load the program.
- Set up a page table to translate logical to physical addresses.
- The backing store is likewise split into pages.
- Paging still suffers from internal fragmentation.

TRANSLATION LOOK-ASIDE BUFFER

- Some TLBs store address-space identifiers (ASIDs) in each TLB entry. An ASID uniquely identifies each process and provides address-space protection for that process; otherwise the TLB must be flushed at every context switch.
- TLBs are typically small (64 to 1,024 entries).
- On a TLB miss, the value is loaded into the TLB for faster access next time. Replacement policies must be considered, and some entries can be wired down for permanent fast access.
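The page-to-frame translation described above can be sketched in a few lines. This is an illustrative example only: the 4 KiB page size and the page-table contents are assumed, and a real MMU would consult the TLB before the page table.

```python
# Sketch (assumed values, not from the slides): translating a logical
# address to a physical address through a page table. The page size
# must be a power of 2; here 4 KiB is assumed.

PAGE_SIZE = 4096  # 2**12

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Split the logical address into (page, offset), map the page to
    its frame, and recombine: physical = frame * PAGE_SIZE + offset."""
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]   # a real MMU checks the TLB first
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```

Because the page size is a power of 2, the division and modulo above are just a split of the address bits: the high bits select the page-table entry and the low bits pass through unchanged as the offset.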
