OS-1st-4th-EXAM-NOTES.pdf

Full Transcript


OPERATING SYSTEMS
− are at the heart of every computer.
− a software that acts as an interface between computer hardware components and the user.
− concerned with the allocation of resources and services, such as memory, processors, devices, and information.
− provides services to users and programmers that make it possible to utilize a computer without having to deal with the low-level, difficult-to-use hardware commands.

The Steps in Copying a File from a CD to a Hard Disk
1. Check for file on CD
2. Check for file on hard disk—confirm overwrite
3. Create file name in hard disk directory
4. Find space for file on hard disk
5. Read data sectors from CD
6. Write data sectors to hard disk
7. Update hard disk directory
8. Update hard drive space information

Terms & Concepts:
Operating System (or just System) − a collection of one or more software modules that manages and controls the resources of a computer or other computing or electronic device, and gives users and programs an interface to utilize these resources.
Device − a piece of hardware connected to the main computer system hardware.
Device driver − a software routine that is part of the OS and is used to communicate with and control a device through its device controller.
Kernel − the part of the OS that implements basic functionality and is always present in memory.
Service − a function that the OS kernel provides to users, mostly through APIs via OS calls.
Utility − a program that is not part of the OS core (or kernel) but works closely with the kernel to provide ease of use or access to system information. A shell or command interpreter is an example of a utility.

User VS System View of an OS

User View − pertains to how users or programs—programs are the main users of the OS—utilize the OS.

Types of Users
1. Application Users (or End Users) − people who use (or run) application or system programs; this group includes all of us.
2. Application Programmers − people who write application programs, such as word processors or email systems.
3. Systems Programmers − people who write software—either programs or components—that is closely tied to the OS.
4. Systems Administrators − people who manage computer facilities, and hence are responsible for installing and upgrading the OS, as well as other systems programs and utilities.

System View − how the OS actually provides services.

Example: When the pointing device is moved, it generates a hardware event called an interrupt, which the OS handles.
− The OS notes the movements of the mouse in hardware-specific units; the readings are the number of pulses generated. This is the low-level system view. The actual software reading the mouse movements is part of the OS and is called a mouse device driver.
− Another part of the OS interprets these readings so they can be converted into a higher-level system view, such as screen coordinates reflecting the mouse movements.

(Bigger) Example: Files
Sometimes the most critical end user's view of an OS is the file system—in particular, file names.
− In the application programmer's view, the file system is a frequently used, critical part of the system.
The system view of the file system is so large that it is usually divided into subparts:
1. file naming and name manipulation (directory services)
2. file services, such as locating and mapping a file name to its data (file allocation and storage)
3. keeping parts of open files in main memory to speed up access to their data (file buffering and caching)
4. the actual management of the storage devices themselves (disk scheduling)
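Referring back to the file-copying steps listed at the start of this section: here is a minimal user-level sketch of the same sequence in C, using only standard library calls. The path names are hypothetical, and several of the eight steps (directory entry, free-space search, bookkeeping) are performed by the OS on our behalf rather than by our code.

#include <stdio.h>

/* Sketch of the CD-to-hard-disk copy. Steps 3, 4, 7, and 8 are
   handled by the OS behind fopen/fwrite/fclose. */
int copy_file(const char *cd_path, const char *disk_path)
{
    FILE *src = fopen(cd_path, "rb");    /* Step 1: check for file on CD */
    if (src == NULL)
        return -1;

    FILE *dst = fopen(disk_path, "wb");  /* Steps 2-4: create/overwrite the
                                            destination; the OS finds space
                                            and adds a directory entry */
    if (dst == NULL) {
        fclose(src);
        return -1;
    }

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, src)) > 0)  /* Step 5: read sectors  */
        fwrite(buf, 1, n, dst);                       /* Step 6: write sectors */

    fclose(src);
    fclose(dst);   /* Steps 7-8: on close, the OS flushes buffers and updates
                      the directory and free-space information */
    return 0;
}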
Layered View of an OS

Major Modules of an OS
Higher-Level Modules
1. Process management
2. File management
3. GUI Management
4. Security and Protection
Lower-Level Modules
1. CPU Scheduling
2. Memory/Cache management
3. I/O Management
4. Disk Scheduling
5. Network Management
6. Device Drivers

Types of resources managed by an OS
CPU − The OS needs to schedule which process to run on each CPU at any point in time.
Main memory and caches − The OS needs to assign memory space to a process before it can execute.
Secondary storage − Another important resource managed by the OS is secondary storage, which is typically a hard disk.
I/O devices − The OS must also control and manage the various input and output devices connected to a computer system.
File systems − The OS also manages higher-level resources that are created through software.
User interfaces − Many modern OSs include another high-level component to handle user interaction.
Network access − allows users and programs on one computer to access other services and devices on a computer network.
Providing protection and security − The OS also provides mechanisms to protect the various resources from unauthorized access, as well as security techniques that allow system administrators to enforce their security policies.

Program VS Process
A process is a program in execution.

Process – Attributes/Characteristics
− Process Id: a unique identifier assigned by the operating system.
− Process State: can be ready, running, etc.
− CPU registers: such as the program counter (CPU registers must be saved and restored when a process is swapped in and out of the CPU).
− Accounting information: amount of CPU used for process execution, time limits, execution ID, etc.
− I/O status information: for example, devices allocated to the process, open files, etc.
− CPU scheduling information: for example, priority (different processes may have different priorities; a shorter process is assigned high priority under shortest-job-first scheduling).

The queue most relevant to CPU scheduling is called the ready queue, which contains all processes that are ready to execute.

Each process is typically given control of the CPU for a maximum period of time, called a time quantum. If the time quantum expires before the process finishes execution, a timer interrupt initiates an OS procedure called context switching, which switches CPU control to another process.

Process – States
− New: a newly created (or being created) process.
− Ready: after creation, the process moves to the Ready state, i.e., the process is ready for execution.
− Run: the process currently running in the CPU (only one process at a time can be under execution on a single processor).
− Wait (or Block): when a process requests I/O access.
− Complete (or Terminated): the process has completed its execution.
− Suspended Ready: when the ready queue becomes full, some processes are moved to the suspended ready state.
− Suspended Block: when the waiting queue becomes full.
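To tie the attributes and states above together, here is a minimal sketch of how a process control block (PCB) might be declared in C. The field and state names mirror the lists above; the struct layout itself is an illustration, not any particular OS's actual PCB.

/* Minimal sketch of a PCB holding the attributes listed above.
   Real kernels keep many more fields. */
enum proc_state {
    NEW, READY, RUNNING, WAITING, TERMINATED,
    SUSPENDED_READY, SUSPENDED_BLOCKED
};

struct pcb {
    int             pid;             /* Process Id: unique identifier     */
    enum proc_state state;           /* Process State: ready, running...  */
    unsigned long   program_counter; /* saved CPU registers (PC shown)    */
    unsigned long   registers[16];   /* saved/restored on context switch  */
    int             priority;        /* CPU scheduling information        */
    unsigned long   cpu_time_used;   /* accounting information            */
    int             open_files[16];  /* I/O status information            */
};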
Process – Schedulers
1. Long term scheduler − also known as the job scheduler.
2. Short term scheduler − also known as the CPU scheduler.
3. Medium term scheduler − takes care of the swapped-out processes.

Process – Queues
1. Job Queue − at the start, all processes are stored in the job queue, which is maintained in secondary memory.
2. Ready Queue − maintained in primary memory.
3. Waiting Queue − when a process needs some I/O operation to complete its execution, the OS changes the state of the process from running to waiting.

Chapter 2 – MEMORY MANAGEMENT

Memory allocation involves assigning memory space to processes or applications in a computer system. It involves selecting the appropriate memory allocation strategy, such as best-fit or first-fit allocation.

4 Types of Memory Allocation Schemes
1. Single User Contiguous
2. Fixed Partitions
3. Dynamic Partitions
4. Relocatable Dynamic Partitions

Single User Contiguous Scheme
A simple memory management method where the entire system's memory is allocated to just one user or process at a time.

Before execution can begin, each job or program is loaded in its entirety into memory and allocated as much contiguous space as it needs. If the program is too large to fit into the available memory space, it cannot begin execution: either the size of main memory must be increased, or the program must be modified to fit, often by revising it to be smaller.

Note: Only one program can fit into memory at a time, even if there is room to accommodate other waiting jobs.

To allocate memory, the amount of work required from the operating system's Memory Manager is minimal, as described in these steps (a short code sketch follows this section):
1) Evaluate the incoming process to see if it is small enough to fit into the available space. If it is, load it into memory; if not, reject it and evaluate the next incoming process.
2) Monitor the occupied memory space. When the resident process ends its execution and no longer needs to be in memory, make the entire amount of main memory space available and return to Step 1 to evaluate the next incoming process.

Once the program is entirely loaded into memory, it begins execution and remains there until execution is complete, either by finishing its work or through the intervention of the operating system, such as when an error is detected.

One major problem with this type of memory allocation scheme is that it doesn't support multiprogramming (multiple jobs or processes occupying memory at the same time); it can handle only one job at a time.
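A minimal sketch of the two-step Memory Manager logic just described, under the simplifying assumption that memory is a single block of MEMORY_SIZE bytes and a job is represented only by its size.

#include <stdbool.h>

#define MEMORY_SIZE 102400          /* assume 100K of allocatable memory */

static bool memory_busy = false;    /* only one resident job at a time */

/* Step 1: evaluate the incoming job; load it only if it fits and
   memory is free. Returns true if the job was loaded. */
bool load_job(unsigned job_size)
{
    if (memory_busy || job_size > MEMORY_SIZE)
        return false;               /* reject; evaluate the next job */
    memory_busy = true;             /* entire memory given to this job */
    return true;
}

/* Step 2: when the resident job ends, release all of memory
   and return to evaluating incoming jobs. */
void job_finished(void)
{
    memory_busy = false;
}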
Fixed Partitions (static partitions)
− supports multiprogramming.

Memory is divided into fixed-size partitions at system startup, with each partition capable of holding exactly one process. These partitions cannot be resized during operation. Each partition within main memory can be assigned to one job, so a system with four partitions could hold four jobs in memory at the same time. One fact remained the same, however—these partitions were static, so the systems administrator had to turn off the entire system to reconfigure their sizes, and any job that couldn't fit into the largest partition could not be executed.

An important factor was introduced with this scheme: protection of the job's memory space. Once a partition was assigned to a job, the jobs in other memory partitions had to be prevented from invading its boundaries, either accidentally or intentionally. This problem of partition intrusion didn't exist in single-user contiguous allocation schemes, because only one job was present in main memory at any given time; only the portion of main memory that held the operating system had to be protected. For fixed partition allocation schemes, however, protection was mandatory for each partition in main memory. Typically, this was the joint responsibility of the computer's hardware and the operating system.

The algorithm used to store jobs in memory requires a few more steps than the one used for a single-user system, because the size of the job must be matched with the size of the available partitions to make sure it fits completely. Remember, this scheme also requires that the entire job be loaded into memory before execution can begin. To do so, the Memory Manager could perform these steps in a two-partition system (a code sketch of the partition table follows this section):
1) Check the incoming job's memory requirements. If it's greater than the size of the largest partition, reject the job and go to the next waiting job. If it's less than the largest partition, go to Step 2.
2) Check the job size against the size of the first available partition. If the job is small enough to fit, see if that partition is free. If it is available, load the job into that partition. If it's busy with another job, go to Step 3.
3) Check the job size against the size of the second available partition. If the job is small enough to fit, check to see if that partition is free. If it is available, load the incoming job into that partition. If not, go to Step 4.
4) Because neither partition is available now, place the incoming job in the waiting queue for loading later. Return to Step 1 to evaluate the next incoming job.

This partition scheme is more flexible than the single-user scheme because it allows more than one program to be in memory at the same time. However, it still requires that the entire program be stored contiguously in memory from the beginning to the end of its execution. To allocate memory space to jobs, the Memory Manager must maintain a table showing each memory partition's size, its address, its access restrictions, and its current status (free or busy).

As jobs are loaded into the four fixed partitions, Job 3 must wait even though Partition 1 has 70K of available memory. Jobs are allocated space based on "first available partition of required size."

The fixed partition scheme works well if all the jobs that run on the system are of similar size, or if the sizes are known ahead of time. On the other hand, if the partition sizes are too big, memory is wasted: any job that occupies less than an entire partition leaves the unused memory in that partition idle. Remember that each partition is an indivisible unit that can be allocated to only one job at a time.

This phenomenon of less-than-complete use of memory space in a fixed partition is called internal fragmentation (because it's inside a partition) and is a major drawback of this memory allocation scheme.
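A minimal sketch of the partition table and the "first available partition of required size" rule described above, generalized from two partitions to a small fixed array. The sizes and the table layout are illustrative.

#include <stdbool.h>

/* Partition table: each entry records size and status, like the
   Memory Manager's table described above. Addresses and access
   restrictions are omitted to keep the sketch short. */
struct partition {
    unsigned size;
    bool     busy;
};

static struct partition table[4] = {
    {100 * 1024, false}, {25 * 1024, false},
    {25 * 1024, false},  {50 * 1024, false},
};

/* Returns the index of the first free partition large enough for
   the job, or -1 if the job must wait in the queue (or be rejected
   outright if it exceeds the largest partition). */
int allocate_fixed(unsigned job_size)
{
    for (int i = 0; i < 4; i++) {
        if (!table[i].busy && job_size <= table[i].size) {
            table[i].busy = true;   /* internal fragmentation: any */
            return i;               /* unused remainder stays idle */
        }
    }
    return -1;                      /* place job in the waiting queue */
}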
Dynamic Partitions
− the computer's memory is divided into partitions based on the needs of processes, rather than into fixed sizes.

Memory is allocated to an incoming job in one contiguous block, and each job is given only as much memory as it requests when it is loaded for processing. Although this is a significant improvement over fixed partitions because memory is no longer wasted inside each partition, it introduces another problem: it works well only when the first jobs are loaded.

A dynamic partition scheme allocates memory efficiently as each of the first few jobs is loaded, but when those jobs finish and new jobs enter the system (which are not the same size as those that just vacated memory), the newer jobs are allocated space in the available partition spaces on a priority basis—here, first-come, first-served priority, meaning each job is loaded into the first available partition. When each resident job terminates, the status of its memory partition is changed from busy to free to make it available to an incoming job.

Therefore, the subsequent allocation of memory creates fragments of free memory between partitions of allocated memory. This problem is called external fragmentation and, like internal fragmentation, allows memory to be wasted.

Figure: Main memory use during dynamic partition allocation. Five snapshots (a–e) of main memory as eight jobs are submitted for processing and allocated space based on "first come, first served." Job 8 must wait (e) even though there's enough free memory between partitions to accommodate it: three free partitions of 5K, 10K, and 20K—35K in all—are enough to accommodate Job 8, which requires only 30K. However, because the three memory blocks are separated by busy partitions, Job 8 cannot be loaded in a contiguous manner, so this scheme forces Job 8 to wait.

Deallocation (release of memory space)
− the process of freeing up memory or resources that were previously allocated for use by a program or process.

For a fixed-partition system: when the job is completed, the Memory Manager immediately deallocates it by resetting the status of the entire memory block from "busy" to "free."

For a dynamic-partition system: the Memory Manager tries to combine free areas of memory whenever possible. Therefore, the system must be prepared for three alternative situations:
1) Case 1: When the block to be deallocated is adjacent to another free block
2) Case 2: When the block to be deallocated is between two free blocks
3) Case 3: When the block to be deallocated is isolated from other free blocks

Two Ways That an Operating System Can Allocate Free Sections of Memory

Best-Fit Allocation
− When a program needs memory, the best-fit algorithm searches through a list of available memory blocks and finds the smallest block that is large enough to fit the requested size.

First-Fit Allocation
− When a program requests memory, the system allocates the first block that is large enough to meet the request.
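A minimal sketch of the two strategies over a free list. The list representation here (an array of {addr, size} blocks) is an assumption for illustration; real memory managers typically keep linked lists sorted by address or size.

/* Free list entry: a run of free memory at 'addr' of length 'size'. */
struct free_block { unsigned addr, size; };

/* First-fit: return the index of the first block large enough. */
int first_fit(struct free_block *list, int n, unsigned request)
{
    for (int i = 0; i < n; i++)
        if (list[i].size >= request)
            return i;
    return -1;                      /* no block fits: the job waits */
}

/* Best-fit: return the index of the smallest block that is still
   large enough for the request. */
int best_fit(struct free_block *list, int n, unsigned request)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (list[i].size >= request &&
            (best == -1 || list[i].size < list[best].size))
            best = i;
    return best;
}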
Relocatable Dynamic Partitions
− the Memory Manager relocates programs to gather all the empty blocks and compact them, making one block of memory large enough to accommodate some or all of the jobs waiting to get in.

The compaction of memory, sometimes referred to as memory defragmentation, is performed by the operating system to reclaim fragmented space. If you stopped lending books for a few moments and rearranged the books in the most effective order, you would be compacting your collection. But this also demonstrates its disadvantage—compaction is an overhead process that can take place only while everything else waits.

Compaction is not an easy task:
1. First, every program in memory must be relocated so that the programs are contiguous.
2. Second, every address, and every reference to an address, within each program must be adjusted to account for the program's new location in memory.

Here we show an example of an assembly language program instruction. The instruction to add the integer 1 to I is coded as:

ADDI I, 1

However, after it has been translated into actual code, it could look like this:

000007 271 01 0 00 000001

It is not immediately obvious which groups of digits are addresses and which are instructions or data values.
− The OS can tell the function of each group of digits by its location in the line and by the operation code.
− However, if the program is to be moved to another place in memory, each address must be identified (or flagged).
− The amount of memory displacement must also be added to or subtracted from all the original addresses in the program. If every address were not adjusted by the same value, the program could:
1. branch to the wrong section of the program
2. reference the wrong data

Bounds Register
− used to store the highest location in memory accessible by each program. This ensures that a program will not try to access memory locations that do not belong to it, i.e., those out of bounds.

Relocation Register
− contains the value that must be added to each memory address referenced in the program (= 0 if the program is not relocated).

---------------------------------------------------------------------------------

Virtual Memory (non-contiguous)

Virtual memory schemes remove the restriction of storing programs contiguously, and most of them eliminate the requirement that the entire program reside in memory during its execution.

Paged Memory Allocation
− is based on the concept of dividing jobs into units of equal size; each unit is called a page.
− Physical memory is divided into fixed-size blocks called frames (the size is a power of 2, e.g., 512 bytes).
− The backing store is divided into fixed-size blocks that are the same size as the memory frames.
− When a process is to be executed, its pages are loaded into any available memory frames from the backing store.

Internal fragmentation occurs in paged memory allocation when the last page of a program is not fully utilized, leading to wasted space within that page frame.

Displacement (also called the offset)
− a relative factor that's used to locate one certain byte within its page frame. Given the page size, the page number is the address divided by the page size, and the displacement is the remainder of that division.

Demand Paging Memory Allocation
A memory management scheme that loads pages into memory only when they are needed during program execution, rather than loading the entire program at once. This approach allows for more efficient use of memory and enables programs to run even if they are larger than the available physical memory.

When executing a program, we can bring the entire process into memory at load time. Or, with virtual memory, we can bring a page into memory only when it is needed (demand paging):
− Less I/O needed, no unnecessary I/O
− Less memory needed
− Faster startup time

With demand paging, there are three new fields for each page in each PMT, used to:
1) Determine if the page being requested is already in memory
2) Determine if the page contents have been modified while in memory
3) Determine if the page has been referenced most recently

Advantages
− Efficient memory use
− Reduced load time
− Flexibility

Disadvantages
− Increased overhead
− Internal fragmentation
− Page faults

Page Replacement Policies and Concepts
− essential algorithms used by the OS to manage memory efficiently when the available physical memory is full.
− determine which pages to remove from memory to make space for new pages, balancing the tradeoff between minimizing page faults and maximizing system performance:
o First in, first out (FIFO)
o Least recently used (LRU)
o Clock replacement variation
o Bit shifting
o The mechanics of paging
o The working set

FIFO
− Used to manage resources such as CPU time, memory, and input/output operations.
− When a page must be replaced, the oldest page is chosen.

LRU (least recently used)
− LRU replacement associates with each page the time of that page's last use.
− When a page must be replaced, LRU chooses the page that has not been used for the longest time.
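A minimal sketch of LRU over a small frame table: each frame remembers the "time" of its page's last use, and the victim is the frame with the oldest timestamp. The fixed frame count and the global counter standing in for a clock are simplifications for illustration.

#define NUM_FRAMES 4

static int           frame_page[NUM_FRAMES] = {-1, -1, -1, -1}; /* page in each frame  */
static unsigned long last_used[NUM_FRAMES];                     /* time of last use    */
static unsigned long clock_ticks = 0;                           /* advances per access */

/* Reference a page; on a miss (page fault), replace the least
   recently used frame. Returns the frame that holds the page. */
int reference_page(int page)
{
    clock_ticks++;
    int lru = 0;
    for (int i = 0; i < NUM_FRAMES; i++) {
        if (frame_page[i] == page) {       /* hit: just refresh its time */
            last_used[i] = clock_ticks;
            return i;
        }
        if (last_used[i] < last_used[lru])
            lru = i;                       /* track the oldest frame */
    }
    frame_page[lru] = page;                /* page fault: evict LRU victim */
    last_used[lru] = clock_ticks;
    return lru;
}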
Processor Management

What is the Processor Manager?
It is responsible for managing the activities of the central processing unit (CPU) or other processors in a system.

Overview
In a simple system, one with a single user and one processor, the processor is busy only when it is executing the user's jobs or system software. However, when there are many users, such as in a multiprogramming environment, or when there are multiple processes competing to be run by a single CPU, the processor must be allocated to each job in a fair and efficient manner. This can be a complex task, as we show in this chapter, which is devoted to single-processor systems.

CPU – performs calculations and executes programs.
Program – a file or set of instructions stored on a disk that isn't active until executed.
Process – an active instance of a program that requires resources like the CPU to run. It's what the program turns into when it starts executing.
Thread – a smaller part of a process that can be independently scheduled and executed. A process can have multiple threads, and each thread can run tasks concurrently.
Multithreading – enables applications to manage multiple threads within a process, improving efficiency and responsiveness.
Multiprogramming – the processor is allocated to different jobs or processes over time, ensuring each gets CPU time.
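To make the process/thread distinction concrete, here is a minimal POSIX threads sketch: one process creating two threads that run concurrently within its address space. The worker function is a placeholder.

#include <pthread.h>
#include <stdio.h>

/* Each thread is an independently scheduled part of the same
   process, sharing that process's memory. */
void *worker(void *arg)
{
    printf("thread %d running\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, worker, &id1);  /* both threads may run */
    pthread_create(&t2, NULL, worker, &id2);  /* concurrently         */

    pthread_join(t1, NULL);   /* the process waits for its threads */
    pthread_join(t2, NULL);
    return 0;
}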
If interrupted, the processor saves its state (a context switch) so it can resume later without starting over.

Interrupt: a signal that temporarily halts the current process so that a higher-priority task can be addressed. After the interrupt is handled, the processor returns to the original process.

Context Switch: when the processor switches from one task to another, it saves its current state so it can pick up where it left off later.

Scheduling Algorithm: the method used by an operating system to decide which process to run at any given time.

About Multi-Core Technologies
Multi-core CPUs, such as dual-core or quad-core chips, feature multiple processing units (cores) on a single chip. This design addresses issues from nano-sized transistors that cause heat and electrical loss when packed too closely together. By using multiple smaller cores instead of one large processor, these chips reduce heat and energy loss while allowing simultaneous processing of multiple tasks. Despite being the same size as single-core chips, multi-core CPUs improve performance and efficiency.

Scheduling Sub Managers

Job Scheduler
− a high-level component of the Processor Manager responsible for selecting jobs from a queue and placing them into the process queue based on job characteristics. Its main goal is to manage the order of jobs to optimize resource usage and system performance, balancing tasks that involve heavy I/O with those requiring significant computation.

Process Scheduler
− handles tasks after they've been selected by the Job Scheduler. It decides which processes get CPU time, for how long, and what happens if processing is interrupted. It manages tasks within the READY queue, directing them through various queues during execution and recognizing when a job is finished. The Process Scheduler coordinates the CPU by balancing tasks between CPU cycles and I/O cycles, ensuring efficient execution of both I/O-bound and CPU-bound jobs. In complex systems, it may also work with a middle-level scheduler to optimize job handling when the system is overloaded.

Job and Process State

Thread State

Control Blocks

Process Management

Deadlock
− a situation where a set of processes is blocked because each process is holding a resource and waiting for another.
− occurs when processes are in a state of circular waiting: each process holds a resource and waits for another resource held by a different process.
− this is a deadlock situation, as neither process 1 nor process 2 can make progress until one of the processes gives up its resource.

Livelock
− similar to deadlock, but the states of the processes involved constantly change with no progress.

Starvation
− a process is perpetually denied necessary resources, leading to indefinite postponement.

CHAPTER 6 – CONCURRENT PROCESSES

Processes VS Processors

Levels of Multiprocessing

Concurrent Processing
 Is a computing model in which multiple processors execute instructions simultaneously for better performance.
 "Concurrent" means occurring at the same time as something else.
 The tasks are broken into subtasks, which are then assigned to different processors to perform simultaneously.

Parallel Processing
 Is when two or more CPUs execute instructions simultaneously, and the Processor Manager coordinates the activity of each processor while synchronizing the cooperative interactions among all of them.
 Example: In a factory, different assembly lines or machines work on different parts of a product at the same time, speeding up the overall production process.
 Some systems allocate a CPU to each program or job; others allocate a CPU to each working set or parts of it.

Parallel VS Concurrent

Advantages of Parallel Processing
 Increased Reliability − if one processor fails, the others can continue to operate and absorb the load.
 Faster Processing − instructions or data manipulations can be processed in parallel, two or more at a time.

Real-Life Example of Parallel Processing

Introduction to Multi-Core Processors

Multi-Core Processor
A single chip component with two or more independent processing units called cores. These cores execute instructions independently and simultaneously. Examples:
Intel Core i7
AMD Ryzen 5
Apple M1
Qualcomm Snapdragon 888
Intel Xeon

Main Problems in Multi-Core Processors
1. Electron Tunneling (Transistor Leak)
2. Heat and Power Consumption

Typical Multiprocessing Configurations
1. Master/Slave Configuration
2. Loosely Coupled Configuration
3. Symmetric Configuration

Master/Slave Configuration
An asymmetric multiprocessing system. Think of it as a single-processor system with additional slave processors, each of which is managed by the primary master processor.
Loosely Coupled Configuration
Called loosely coupled because each processor controls its own resources—its own files, access to memory, and its own I/O devices—which means that each processor maintains its own commands and I/O management tables.

Symmetric Configuration
Features decentralized processor scheduling. That is, a single copy of the operating system, and a global table listing each process and its status, is stored in a common area of memory so every processor has access to it.

Process Synchronization Software

Process Synchronization
The need for algorithms to resolve conflicts between processors in a multiprocessing environment; or the need to ensure that events occur in the proper order even if they are carried out by several processes.

The success of process synchronization hinges on the capability of the operating system to make a resource unavailable to other processes while it is being used by one of them.

Consider the scenario in which Processor 1 and Processor 2 finish with their current jobs at the same time. To run the next job, each processor must:
1. Consult the list of jobs to see which one should be run next.
2. Retrieve the job for execution.
3. Increment the READY list to the next job.
4. Execute it.

The common element in all synchronization schemes is to allow a process to finish work on a critical part of the program before other processes have access to it.

Critical Region − a part of a program that must complete execution before other processes can begin.

Lock-and-key arrangement − a process must get the key before it can work on a critical region. This locks out the other processes until it finishes; then it unlocks the entry and returns the key so that another process can get the key and begin its work.

Locking Mechanisms
Test-and-set
WAIT and SIGNAL
Semaphores

Test-and-Set
Test-and-set is a single, indivisible machine instruction, known simply as TS, introduced by IBM for its early multiprocessing computers. It tests to see if the key is available and, if it is, sets it to unavailable. The actual key is a single bit in a storage location:
0 − Free
1 − Busy

Disadvantages
1. When many processes are waiting to enter a critical region, starvation can occur because the processes gain access in an arbitrary fashion.
2. The waiting processes remain in unproductive, resource-consuming wait loops because they repeatedly check for the key, which also requires context switching. This is known as busy waiting.

WAIT and SIGNAL
A modification of test-and-set that's designed to remove busy waiting. Two new operations, WAIT and SIGNAL, are mutually exclusive and become part of the process scheduler's set of operations.

WAIT sets the process to a blocked state and links it to the queue of processes waiting to enter this particular critical region.

SIGNAL is activated when a process exits the critical region and the condition code is set to "free."
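A minimal sketch of the test-and-set lock described above, using C11's atomic exchange to stand in for the indivisible TS instruction (0 = free, 1 = busy). The spin loop is exactly the busy waiting listed as a drawback, which WAIT and SIGNAL are designed to remove.

#include <stdatomic.h>

static atomic_int key = 0;        /* 0 = free, 1 = busy */

/* Acquire: atomically test the key and set it to busy. If it was
   already busy, keep trying: this loop is busy waiting. */
void acquire(void)
{
    while (atomic_exchange(&key, 1) == 1)
        ;  /* spin until the key was observed free */
}

/* Release: unlock the entry so another process can get the key. */
void release(void)
{
    atomic_store(&key, 0);
}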
Semaphores
In an operating system, a semaphore is set to either zero or one to perform a similar function: it signals if and when a resource is free and can be used by a process.

Edsger Dijkstra (1965) introduced two operations to overcome the process synchronization problem we've discussed. The operations are called:
P − Proberen (to test)
V − Verhogen (to increment)

Here's how semaphores work: if we let s be a semaphore variable, then the V operation on s is simply to increment s by 1. The action can be stated as:
V(s): s = s + 1

The operation P on s is to test the value of s and, if it's not 0, to decrement it by 1. The action can be stated as:
P(s): If s > 0 then s = s − 1

If s = 0, it means that the critical region is busy, and the process calling on the test operation must wait until the operation can be executed, which is when s > 0.
s = 0 → Unavailable
s = 1 → Free

Process Cooperation
There are occasions when several processes work directly together to complete a common task. Two famous examples are the problems of:
producers and consumers
readers and writers
Each case requires both mutual exclusion and synchronization, and each is implemented by using semaphores.

Producer and Consumer Problem
The cook produces hamburgers that are sent to the bagger, who consumes them. Both processes (cook and bagger) have access to one common area, the hamburger bin, which can hold only a finite number of hamburgers (the bin is essentially a buffer).

How do buffers work? A buffer accommodates two different speeds: the producer can temporarily store data in the buffer, and the consumer can retrieve it later at a slower speed, freeing the CPU from having to wait unnecessarily.

Reader and Writer Problem
This problem arises when two types of processes (readers and writers) need to access a shared resource such as a file or database. The system can't allow someone to be writing while someone else is reading or writing the exact same file; it must enforce mutual exclusion for any and all writers.

Courtois, Heymans, and Parnas offered two solutions:
Readers over writers − readers are kept waiting only if a writer is actually modifying the data. However, if there is a continuous stream of readers, this policy results in writer starvation.
Writers over readers − as soon as a writer arrives, any readers that are already active are allowed to finish processing, but all additional readers are put on hold until the writer is done. Obviously, if a continuous stream of writers is present, this policy results in reader starvation.
− Either scenario is unacceptable.

How can we prevent reader/writer starvation? Hoare (1974) proposed the following combination priority policy: when a writer is finished, any and all readers who are waiting or on hold are allowed to read. Then, when that group of readers is finished, the writer who is on hold can begin; any new readers who arrive in the meantime aren't allowed to start until the writer is finished.
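A minimal sketch of the cook/bagger pattern above using POSIX semaphores: empty_slots counts free bin slots, full_slots counts hamburgers waiting, and a mutex guards the bin itself. The bin size and item type are illustrative.

#include <semaphore.h>
#include <pthread.h>

#define BIN_SIZE 8

static int   bin[BIN_SIZE];         /* the hamburger bin (a buffer) */
static int   in = 0, out = 0;
static sem_t empty_slots, full_slots;
static pthread_mutex_t bin_lock = PTHREAD_MUTEX_INITIALIZER;

void producer_put(int burger)       /* the cook */
{
    sem_wait(&empty_slots);         /* P: wait for a free slot */
    pthread_mutex_lock(&bin_lock);
    bin[in] = burger;
    in = (in + 1) % BIN_SIZE;
    pthread_mutex_unlock(&bin_lock);
    sem_post(&full_slots);          /* V: signal one burger is ready */
}

int consumer_get(void)              /* the bagger */
{
    sem_wait(&full_slots);          /* P: wait until a burger exists */
    pthread_mutex_lock(&bin_lock);
    int burger = bin[out];
    out = (out + 1) % BIN_SIZE;
    pthread_mutex_unlock(&bin_lock);
    sem_post(&empty_slots);         /* V: signal a slot is now free */
    return burger;
}

/* Initialization (call once before starting the threads):
   sem_init(&empty_slots, 0, BIN_SIZE); sem_init(&full_slots, 0, 0); */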
Flynn's Taxonomy
Michael Flynn (1972) published what's now known as Flynn's taxonomy, describing machine structures that fall into four main classifications of parallel construction, built from combinations of single/multiple instruction and single/multiple data. Each category presents different opportunities for parallelism.
1. The single instruction, single data (SISD) classification is represented by systems with a single processor and a single stream of data, which presents few if any opportunities for parallelism.
2. The multiple instructions, single data (MISD) classification includes systems with multiple processors (which might allow some level of parallelism) and a single stream of data. Configurations such as this might allow instruction-level parallelism but little or no data-level parallelism without additional software assistance.
3. The single instruction, multiple data (SIMD) classification is represented by systems with a single processor and multiple data streams.
4. The multiple instructions, multiple data (MIMD) classification is represented by systems with multiple processors and multiple data streams. These systems may allow the most creative use of both instruction-level parallelism and data-level parallelism.

Amdahl's Law
Gene Amdahl proposed in 1967 that the limits of parallel processors could be calculated:

Speedup = 1 / ((1 − P) + P/N)

where:
Speedup: the maximum theoretical speedup achievable.
P: the fraction of the program that can be parallelized.
N: the number of processors.

Notice that all four graphs level off and there is no speed difference even though the number of processors increased from 2,048 to 65,536.

Concurrent Programming
Multiprocessing can also refer to one job using several processors to execute sets of instructions in parallel. The concept isn't new, but it requires a programming language and a computer system that can support this type of construct. This type of system is referred to as a concurrent processing system, and it can generally perform data-level parallelism and instruction (or task) level parallelism.

Categories of Parallel Systems
Data-level parallelism (DLP) − refers to systems that can operate on one or more streams or elements of data.
Instruction (or task) level parallelism (ILP) − refers to systems that can perform multiple instructions in parallel.

Explicit vs Implicit Parallelism

4 Cases of Operations
Concurrent processing can dramatically reduce the complexity of working with:
1. Array Operations
2. Matrix Multiplication
3. Searching Databases
4. Sorting and Merging Files

Case 1: Array Operations
Let's say we need to perform an instruction on the objects in an array. To do so, the code might say this:
for(j = 1; j ...

Concurrent Processing in Operations

Order of Operations
To assure that every equation results in exactly the same answer, the order of operations must be followed. For example, an expression can be broken into temporary results (T1, T2, ...) that are computed concurrently and then combined:
Z = T4 + C(T5)
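A quick numeric check of the formula above: with P = 0.95, even an unlimited processor count cannot push the speedup past 1 / (1 − P) = 20, which is why the curves level off between 2,048 and 65,536 processors.

#include <stdio.h>

/* Speedup = 1 / ((1 - P) + P/N), from Amdahl's law above. */
double amdahl(double p, double n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    double p = 0.95;                /* fraction that can be parallelized */
    int counts[] = {2, 16, 2048, 65536};

    for (int i = 0; i < 4; i++)     /* the speedup levels off near 20 */
        printf("N = %5d  speedup = %.2f\n",
               counts[i], amdahl(p, counts[i]));
    return 0;
}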
