OS in 6 hours.pdf
Document Details
Uploaded by InexpensiveCentaur
West Herts College
Full Transcript
Syllabus (Semester Exam)

Unit I - Introduction: Operating system and functions; Classification of operating systems: Batch, Interactive, Time-sharing, Real-time systems, Multiprocessor systems, Multiuser systems, Multiprocess systems, Multithreaded systems; Operating system structure: Layered structure, System components, Operating system services, Reentrant kernels, Monolithic and microkernel systems.

Unit II - CPU Scheduling: Scheduling concepts, Performance criteria, Process states, Process transition diagram, Schedulers, Process Control Block (PCB), Process address space, Process identification information, Threads and their management, Scheduling algorithms, Multiprocessor scheduling. Deadlock: System model, Deadlock characterization, Prevention, Avoidance and detection, Recovery from deadlock.

Unit III - Concurrent Processes: Process concept, Principle of concurrency, Producer/Consumer problem, Mutual exclusion, Critical section problem, Dekker's solution, Peterson's solution, Semaphores, Test-and-Set operation; Classical problems in concurrency: Dining Philosophers problem, Sleeping Barber problem; Inter-process communication models and schemes, Process generation.

Unit IV - Memory Management: Basic bare machine, Resident monitor, Multiprogramming with fixed partitions, Multiprogramming with variable partitions, Protection schemes, Paging, Segmentation, Paged segmentation, Virtual memory concepts, Demand paging, Performance of demand paging, Page replacement algorithms, Thrashing, Cache memory organization, Locality of reference.

Unit V - I/O Management and Disk Scheduling: I/O devices and I/O subsystems, I/O buffering, Disk storage and disk scheduling, RAID. File System: File concept, File organization and access mechanism, File directories and file sharing, File system implementation issues, File system protection and security.
Chapters of This Video
- Chapter 1: Introduction - Operating system, Goals & functions, System components, Operating system services, Classification of operating systems: Batch, Interactive, Multiprogramming, Multiuser systems, Time sharing, Multiprocessor systems, Real-time systems.
- Chapter 2: Operating System Structure - Layered structure, Monolithic and microkernel systems, Interface, System call.
- Chapter 3: Process Basics - Process Control Block (PCB), Process identification information, Process states, Process transition diagram, Schedulers, CPU-bound and I/O-bound processes, Context switch.
- Chapter 4: CPU Scheduling - Scheduling performance criteria, Scheduling algorithms.
- Chapter 5: Process Synchronization - Race condition, Critical section problem, Mutual exclusion, Dekker's solution, Peterson's solution, Process concept, Principle of concurrency.
- Chapter 6: Semaphores - Classical problems in concurrency: Producer/Consumer problem, Reader-Writer problem, Dining Philosophers problem, Sleeping Barber problem, Test-and-Set operation.
- Chapter 7: Deadlock - System model, Deadlock characterization, Prevention, Avoidance and detection, Recovery from deadlock.
- Chapter 8: Fork command, Multithreaded systems, Threads and their management.
- Chapter 9: Memory Management - Memory hierarchy, Locality of reference, Multiprogramming with fixed partitions, Multiprogramming with variable partitions, Protection schemes, Paging, Segmentation, Paged segmentation.
- Chapter 10: Virtual Memory - Demand paging, Performance of demand paging, Page replacement algorithms, Thrashing.
- Chapter 11: Disk Management - Disk basics, Disk storage and disk scheduling, Total transfer time.
- Chapter 12: File System - File allocation methods, Free-space management, File organization and access mechanism, File directories and file sharing, File system implementation issues, File system protection and security.

What is Operating System
1.
Intermediary - Acts as an intermediary between the user and the hardware.
2. Resource Manager/Allocator - The operating system controls and coordinates the use of system resources among various application programs in an unbiased fashion.
3. Platform - The OS provides the platform on which other application programs can be installed, and provides the environment within which programs are executed.

Goals and Functions of an Operating System
Goals are the ultimate destination; functions are what we implement to reach those goals. ("Together with everyone, development for everyone.")

Goals of an operating system:
1. Primary goal: Convenience (user-friendliness).
2. Secondary goals: Efficiency (using resources in an efficient manner), reliability, maintainability.

Functions of an operating system:
1. Process Management: Involves handling the creation, scheduling, and termination of processes, which are executing programs.
2. Memory Management: Manages allocation and deallocation of physical and virtual memory spaces to various programs.
3. I/O Device Management: Handles I/O operations of peripheral devices like disks, keyboards, etc., including buffering and caching.
4. File Management: Manages files on storage devices, including their information, naming, permissions, and hierarchy.
5. Network Management: Manages network protocols and functions, enabling the OS to establish network connections and transfer data.
6. Security & Protection: Ensures system protection against unauthorized access and other security threats through authentication, authorization, and encryption.

Major Components of an Operating System
1. Kernel - Central component: manages the system's resources and the communication between hardware and software.
2. Process Management - Process scheduler: determines the execution of processes.
Process Control Block (PCB): contains process details such as process ID, priority, status, etc. Concurrency control: manages simultaneous execution.
3. Memory Management - Physical memory management: manages RAM allocation. Virtual memory management: simulates additional memory using disk space. Memory allocation: assigns memory to different processes.
4. File System Management - File handling: manages the creation, deletion, and access of files and directories. File Control Block: stores file attributes and control information. Disk scheduling: organizes the order of reads and writes to disk.
5. Device Management - Device drivers: interface between the hardware and the operating system. I/O controllers: manage data transfer to and from peripheral devices.
6. Security and Access Control - Authentication: verifies user credentials. Authorization: controls access permissions to files and directories. Encryption: ensures data confidentiality and integrity.
7. User Interface - Command-line interface (CLI): text-based user interaction. Graphical user interface (GUI): visual, user-friendly interaction with the OS.
8. Networking - Network protocols: rules for communication between devices on a network. Network interface: manages the connection between the computer and the network.

Batch Operating System
1. Early computers were not interactive devices; the user prepared a job consisting of three parts: (a) program, (b) control information, (c) input data.
2. Only one job was given as input at a time, as there was no memory to hold more; the computer took the input, processed it, and then generated the output.
3. Common input/output devices were punch cards or tape drives. These devices were very slow, so the processor remained idle most of the time.
4. To speed up processing, jobs of similar type (e.g., FORTRAN jobs, COBOL jobs) were batched together and run through the processor as a group (batch).
5. In some systems grouping is done by the operator, while in others it is performed by the 'Batch Monitor' (resident in the low end of main memory).
6. The jobs (as decks of punched cards) are bundled into batches with similar requirements.

Spooling (Simultaneous Peripheral Operations On-Line)
1. In a computer system, input-output devices such as printers are very slow relative to the performance of the rest of the system.
2. Spooling is a process in which data is temporarily held in memory or other volatile storage to be used later by a device or a program.
3. The most common implementations of spooling are found in typical I/O devices such as the keyboard, mouse and printer. For example, in printer spooling, the documents/files that are sent to the printer are first stored in memory. Once the printer is ready, it fetches the data and prints it.
4. Ever had your mouse or keyboard freeze briefly? We often click around to test whether it is working. When it unfreezes, all those stored clicks execute rapidly because of the device's spool.

Multiprogramming Operating System
- Multiple jobs: keeps several jobs in main memory simultaneously, allowing more efficient utilization of the CPU.
- Job execution: the OS picks and begins to execute one of the jobs in memory.
- Waiting jobs: eventually, a job may need to wait for a task, such as an I/O operation, to complete. (The processor will not wait for anyone.)
- Non-multiprogrammed: the CPU sits idle while waiting for a job to complete.
- Multiprogrammed: the OS switches to and executes another job if the current job needs to wait, utilizing the CPU effectively.
- Conclusion - the show must go on. Efficient utilization: ensures that the CPU is never idle as long as at least one job needs to execute, leading to better utilization of resources.

Advantages:
- High CPU utilization: enhances processing efficiency.
- Less waiting time: minimizes idle time.
- Multi-task handling: manages concurrent tasks effectively.
- Shared CPU time: increases system efficiency.

Disadvantages:
- Complex scheduling: difficult to program.
- Complex memory management: intricate handling of memory is required.

Multitasking Operating System (Time Sharing / Multiprogramming with Round Robin / Fair Share)
1. Time sharing (or multitasking) is a logical extension of multiprogramming; it allows many users to share the computer simultaneously. The CPU executes multiple jobs (which may belong to different users) by switching among them, but the switches occur so frequently that each user gets the impression that the entire computer system is dedicated to his/her use, even though it is being shared among many users.
2. In modern operating systems we can play MP3 music, edit documents in Microsoft Word, and surf in Google Chrome, all running at the same time (by context switching, the illusion of parallelism is achieved).
3. For multitasking to take place, there must first be multiprogramming, i.e. the presence of multiple programs ready for execution, and secondly the concept of time sharing.

Multiprocessing Operating System (Tightly Coupled System)
1. A multiprocessor operating system refers to the use of two or more central processing units (CPUs) within a single computer system. The multiple CPUs share the system bus, memory and other peripheral devices.
2. Multiple concurrent processes can each run on a separate CPU; here we achieve true parallel execution of processes.
3. It becomes most important in computer systems where the complexity of the job is high and the CPU divides and conquers the jobs. Generally used in fields like artificial intelligence and expert systems, image processing, and weather forecasting.
Symmetric vs Asymmetric Multiprocessing
- Definition: Symmetric - all processors are treated equally and can run any task. Asymmetric - each processor is assigned a specific task or role.
- Task allocation: Symmetric - any processor can perform any task. Asymmetric - tasks are divided according to processor roles.
- Complexity: Symmetric - generally simpler, as all processors are treated the same. Asymmetric - more complex due to the dedicated role of each processor.
- Scalability: Symmetric - easily scalable by adding more processors. Asymmetric - may require reconfiguration as processors are added.
- Performance: Symmetric - load is evenly distributed, enhancing performance. Asymmetric - performance may vary based on the specialization of tasks.

Multiprogramming vs Multiprocessing
- Definition: Multiprogramming - allows multiple programs to share a single CPU. Multiprocessing - utilizes multiple CPUs to run multiple processes concurrently.
- Concurrency: Multiprogramming - simulates concurrent execution by rapidly switching between tasks. Multiprocessing - achieves true parallel execution of processes.
- Resource utilization: Multiprogramming - maximizes CPU utilization by keeping it busy with different tasks. Multiprocessing - enhances performance by allowing tasks to be processed simultaneously.
- Hardware requirements: Multiprogramming - requires only one CPU and manages multiple tasks on it. Multiprocessing - requires multiple CPUs, enabling parallel processing.
- Complexity and coordination: Multiprogramming - less complex, primarily managing task switching on one CPU. Multiprocessing - more complex, requiring coordination among multiple CPUs.

Real-Time Operating System
1. A real-time operating system is a special-purpose operating system with well-defined, fixed time constraints. Processing must be done within the defined time limit or the system will fail.
2. It is valued more for how quickly or how predictably it can respond, without buffer delays, than for the amount of work it can perform in a given period of time.
3.
Examples: a petroleum refinery, airline reservation systems, air traffic control systems, systems that provide up-to-the-minute information on stock prices, and defense applications such as RADAR.

Hard real-time operating system - deadlines are strict: a result produced after its deadline is useless, and missing a deadline counts as system failure. An example is the air-bag control system in cars.

Soft real-time operating system - deadlines exist but may occasionally be missed; the action is then taken slightly late (at time t = 0+), and the critical timing is relaxed to some extent. Examples are digital cameras, mobile phones, online data systems, etc.

Hard vs Soft Real-Time Operating Systems
- Deadline constraints: Hard - must meet strict deadlines without fail. Soft - can miss deadlines occasionally without failure.
- Response time: Hard - fixed and guaranteed. Soft - predictable, but not guaranteed.
- Applications: Hard - used in life-critical systems like medical devices and nuclear reactors. Soft - used in multimedia, user interfaces, etc.
- Complexity and cost: Hard - typically more complex and costlier. Soft - less complex and usually less expensive.
- Reliability: Hard - must be highly reliable and fault-tolerant. Soft - high reliability desired, but some failures are tolerable.

Distributed OS
1. A distributed operating system is software running over a collection of independent, networked, communicating, loosely coupled, physically separate computational nodes.
2. The nodes communicate with one another through various networks, such as high-speed buses and the Internet. They handle jobs that are serviced by multiple CPUs. Each individual node holds a specific software subset of the global aggregate operating system.
3.
There are four major reasons for building distributed systems: resource sharing, computation speedup, reliability, and communication.

Structure of Operating System
A common approach is to partition the task into small components, or modules, rather than have one monolithic system. Each of these modules should be a well-defined portion of the system, with carefully defined inputs, outputs, and functions.

Simple structure - Many operating systems do not have well-defined structures. Frequently, such systems started as small, simple, and limited systems and then grew beyond their original scope. MS-DOS is an example of such a system: it is not divided into modules, and its interfaces, levels and functionality are not well separated.

Layered approach - With proper hardware support, operating systems can be broken into pieces. The operating system can then retain much greater control over the computer and over the applications that make use of that computer.
1. Implementers have more freedom in changing the inner workings of the system and in creating modular operating systems.
2. Under a top-down approach, the overall functionality and features are determined and are separated into components.
3. Information hiding is also important, because it leaves programmers free to implement the low-level routines as they see fit.
4. A system can be made modular in many ways. One method is the layered approach, in which the operating system is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.

Micro-kernel approach
In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach. This method structures the operating system by removing all nonessential components from the kernel and implementing them as system- and user-level programs.
The result is a smaller kernel. One benefit of the microkernel approach is that it makes extending the operating system easier. All new services are added to user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel. The MINIX 3 microkernel (developed by Andrew S. Tanenbaum), for example, has only approximately 12,000 lines of code.

User and Operating-System Interface
There are several ways for users to interface with the operating system. Here, we discuss two fundamental approaches: the command-line interface (command interpreter) and graphical user interfaces.

Command interpreters - Some operating systems include the command interpreter in the kernel. Others, such as Windows and UNIX, treat the command interpreter as a special program that is running when a job is initiated or when a user first logs on (on interactive systems). On systems with multiple command interpreters to choose from, the interpreters are known as shells. For example, on UNIX and Linux systems, a user may choose among several different shells, including the Bourne shell, C shell, Bourne-Again shell, Korn shell, and others.

Graphical user interfaces - A second strategy for interfacing with the operating system is through a user-friendly graphical user interface, or GUI. Here, users employ a mouse-based window-and-menu system characterized by a desktop. The user moves the mouse to position its pointer on images, or icons, on the screen (the desktop) that represent programs, files, directories, and system functions. Depending on the mouse pointer's location, clicking a button on the mouse can invoke a program, select a file or directory (known as a folder), or pull down a menu that contains commands.
Because a mouse is impractical for most mobile systems, smartphones and handheld tablet computers typically use a touchscreen interface. Here, users interact by making gestures on the touchscreen, for example, pressing and swiping fingers across the screen.

The choice of whether to use a command-line or GUI interface is mostly one of personal preference. System administrators who manage computers and power users who have deep knowledge of a system frequently use the command-line interface. For them it is more efficient, giving them faster access to the activities they need to perform. Indeed, on some systems, only a subset of system functions is available via the GUI, leaving the less common tasks to those who are command-line knowledgeable.

System call
System calls provide the means for a user program to ask the operating system to perform tasks reserved for the operating system on the user program's behalf. System calls provide an interface to the services made available by an operating system. These calls are generally available as routines written in C and C++. The API specifies a set of functions that are available to an application programmer, including the parameters that are passed to each function and the return values the programmer can expect.

Types of system calls - System calls can be grouped roughly into six major categories: process control, file manipulation, device manipulation, information maintenance, communications, and protection.

Process control:
1. end, abort
2. load, execute
3. create process, terminate process
4. get process attributes, set process attributes
5. wait for time
6. wait event, signal event
7. allocate and free memory

File management:
1. create file, delete file
2. open, close
3. read, write, reposition
4. get file attributes, set file attributes

Device management:
1.
request device, release device
2. read, write, reposition
3. get device attributes, set device attributes
4. logically attach or detach devices

Information maintenance:
1. get time or date, set time or date
2. get system data, set system data
3. get process, file, or device attributes
4. set process, file, or device attributes

Communications:
1. create, delete communication connection
2. send, receive messages; transfer status information

Mode
We need two separate modes of operation: user mode and kernel mode (also called supervisor mode, system mode, or privileged mode). A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1). When the computer system is executing on behalf of a user application, the system is in user mode. However, when a user application requests a service from the operating system (via a system call), the system must transition from user to kernel mode to fulfill the request.

Process
In general, a process is a program in execution. A program is not a process by default. A program is a passive entity, i.e. a file containing a list of instructions stored on disk (secondary memory), often called an executable file. A program becomes a process when the executable file is loaded into main memory and its PCB is created. A process, on the other hand, is an active entity, which requires resources like main memory, CPU time, registers, system buses, etc.

Even if two processes are associated with the same program, they are considered two separate execution sequences and are totally different processes. For instance, if a user invokes many copies of a web browser program, each copy is treated as a separate process: even though the text section is the same, the data, heap and stack sections can differ.
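The "create process" system call and the many-copies-of-one-program point can be demonstrated in a few lines. This is an illustrative sketch, not from the source: it uses Python's subprocess module as a wrapper over the OS's process-creation system calls (fork/exec on UNIX) to launch two children from the same program text and show that each is a distinct process with its own PID.

```python
# Sketch: two processes created from the same program text are still
# different processes. subprocess.run wraps the underlying
# process-creation system calls (fork/exec on UNIX).
import os
import subprocess
import sys

child_code = "import os; print(os.getpid())"  # same "program" for both children

pids = [
    int(subprocess.run([sys.executable, "-c", child_code],
                       capture_output=True, text=True).stdout)
    for _ in range(2)
]

print("parent:", os.getpid(), "children:", pids)
# Same text section (child_code), yet two separate processes:
assert pids[0] != pids[1]
```

Each run of the same code gets a fresh PID, a fresh address space, and its own PCB inside the kernel, which is exactly why multiple copies of one browser program count as separate processes.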
A process consists of the following sections:
- Text section: also known as the program code.
- Stack: contains temporary data (function parameters, return addresses and local variables).
- Data section: contains global variables.
- Heap: memory dynamically allocated during process runtime.

Program vs Process
- Definition: Program - a set of instructions written to perform a specific task. Process - an instance of a program being executed.
- State: Program - static; exists as code on disk or in storage. Process - dynamic; exists in memory and has a state (e.g., running, waiting).
- Resources: Program - does not require system resources when not running. Process - requires CPU time, memory, and other resources during execution.
- Independence: Program - exists independently and is not executing. Process - can operate concurrently with other processes.
- Interaction: Program - does not interact with other programs or the system. Process - can interact with other processes and the operating system through system calls and inter-process communication.

Process Control Block (PCB)
Each process is represented in the operating system by a process control block (PCB), also called a task control block. The PCB serves as the repository for any information that may vary from process to process. It contains many pieces of information associated with a specific process, including these:
1. Process state: the state may be new, ready, running, waiting, halted, and so on.
2. Program counter: the counter indicates the address of the next instruction to be executed for this process.
3. CPU registers: the registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
4.
CPU-scheduling information: this includes the process priority, pointers to scheduling queues, and any other scheduling parameters.
5. Memory-management information: this may include such items as the values of the base and limit registers and the page tables or segment tables, depending on the memory system used by the operating system.
6. Accounting information: this includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
7. I/O status information: this includes the list of I/O devices allocated to the process, a list of open files, and so on.

Process States
A process changes state as it executes. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:
- New: the process is being created.
- Ready: the process is waiting to be assigned to a processor.
- Running: instructions are being executed.
- Waiting (Blocked): the process is waiting for some event to occur (such as an I/O completion or reception of a signal).
- Terminated: the process has finished execution.

Schedulers
A process migrates among the various scheduling queues throughout its lifetime. The operating system must select, for scheduling purposes, processes from these queues in some fashion. The selection is carried out by the appropriate scheduler.

Types of schedulers:
- Long-term scheduler (LTS), also called the job scheduler: determines which processes enter the ready queue from the job pool. Operating less frequently than the short-term scheduler, it focuses on long-term system goals such as maximizing throughput.
- Medium-term scheduler: swaps processes in and out of memory to optimize CPU usage and manage memory allocation. By doing so, it adjusts the degree of multiprogramming and frees up memory as needed.
Swapping allows the system to pause and later resume a process, improving overall system efficiency.
- Short-term scheduler (STS), or CPU scheduler: selects from among the processes that are ready to execute and allocates the CPU to one of them.

Long-Term vs Short-Term vs Medium-Term Scheduler
- Function: Long-term - controls the admission of new processes into the system. Short-term - selects which ready process will execute next. Medium-term - adjusts the degree of multiprogramming, moving processes between the ready and waiting queues.
- Frequency: Long-term - executes infrequently, as it deals with the admission of new processes. Short-term - executes frequently, to rapidly switch between processes. Medium-term - executes at an intermediate frequency, balancing long-term and short-term needs.
- Responsibility: Long-term - determines which programs are admitted to the system from the job pool. Short-term - manages CPU scheduling and the switching of processes. Medium-term - controls the mix of CPU-bound and I/O-bound processes to optimize throughput.
- Impact on system performance: Long-term - influences overall system performance and the degree of multiprogramming. Short-term - directly impacts CPU utilization and response time. Medium-term - balances system load to prevent resource bottlenecks or idle resources.
- Decision making: Long-term - makes decisions based on long-term goals like system throughput. Short-term - makes decisions based on short-term goals like minimizing response time. Medium-term - makes decisions considering both short-term and long-term goals, optimizing resource allocation.

Dispatcher - The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This involves: switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.
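The PCB fields and the dispatcher's save/restore steps described above can be sketched in a few lines. This is a toy model with invented names (a real kernel keeps these structures in kernel memory and saves actual hardware registers), but the sequence matches the text: save the old context into its PCB, then restore the new context from the chosen PCB.

```python
# Illustrative PCB and dispatcher sketch; field and function names are
# invented for this example, not taken from any real kernel.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"                 # new, ready, running, waiting, terminated
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    priority: int = 0
    open_files: list = field(default_factory=list)

def dispatch(current, chosen, cpu):
    """Give the CPU to `chosen`: save the old context, restore the new one."""
    if current is not None:
        current.registers = dict(cpu)  # state save into the old process's PCB
        current.state = "ready"
    cpu.clear()
    cpu.update(chosen.registers)       # state restore from the new process's PCB
    chosen.state = "running"
    return chosen

cpu = {"r0": 7}                        # pretend hardware register file
p1 = PCB(pid=1, state="running", registers={"r0": 7})
p2 = PCB(pid=2, state="ready", registers={"r0": 42})
running = dispatch(p1, p2, cpu)
print(running.pid, cpu["r0"], p1.state)   # 2 42 ready
```

Everything `dispatch` does is pure overhead in the sense of the context-switch discussion that follows: no user instructions run while the registers are being saved and restored.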
CPU Bound and I/O Bound Processes
Process execution consists of a cycle of CPU execution and I/O waits, and a process normally alternates between these two states. Execution begins with a CPU burst, which may be followed by an I/O burst, then another CPU burst and another I/O burst, and so on. Eventually execution ends with a final CPU burst. So a process keeps switching between the CPU and I/O during execution.
- I/O-bound process: one that spends more of its time doing I/O than doing computations.
- CPU-bound process: one that generates I/O requests infrequently, using more of its time doing computations.
It is important that the long-term scheduler select a good mix of I/O-bound and CPU-bound processes. If all processes are I/O-bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. Similarly, if all processes are CPU-bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced.

Context Switch
Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. Context-switch time is pure overhead, because the system does no useful work while switching.

CPU Scheduling
1. CPU scheduling is the process of determining which process in the ready queue is allocated the CPU.
2.
Various scheduling algorithms can be used to make this decision, such as First-Come-First-Served (FCFS), Shortest Job Next (SJN), Priority, and Round Robin (RR).
3. Different algorithms suit different classes of processes and favour different scheduling criteria.

Types of scheduling
Non-pre-emptive: under non-pre-emptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it willingly. A process will leave the CPU only:
1. when it completes its execution (termination state), or
2. when it wants to perform some I/O operation (blocked state).

Pre-emptive: under pre-emptive scheduling, once the CPU has been allocated to a process, the process may leave the CPU willingly or be forced out. It will leave the CPU:
1. when it completes its execution,
2. when it leaves the CPU voluntarily to perform some I/O operation,
3. when a new, higher-priority process enters the ready state, or
4. when it switches from running to ready because its time quantum expires.

Non-Pre-emptive vs Pre-emptive Scheduling
- CPU allocation: Non-pre-emptive - once a process starts, it runs to completion or until it waits for some event. Pre-emptive - a process can be interrupted and moved to the ready queue.
- Response time: Non-pre-emptive - can be longer, especially for short tasks. Pre-emptive - generally shorter, as higher-priority tasks can pre-empt others.
- Complexity: Non-pre-emptive - simpler to implement. Pre-emptive - more complex, requiring careful handling of shared resources.
- Resource utilization: Non-pre-emptive - may lead to inefficient CPU utilization. Pre-emptive - typically more efficient, as it can quickly switch tasks.
- Suitable applications: Non-pre-emptive - batch systems and applications that require predictable timing. Pre-emptive - interactive and real-time systems requiring responsive behavior.
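Quantum expiry, case 4 in the pre-emptive list above, is easiest to see in a Round Robin loop. The following is a minimal sketch under assumed inputs (the process names and the quantum value are invented for illustration): a process that still has work left when its quantum expires is pre-empted and sent to the back of the ready queue.

```python
# Minimal Round Robin sketch showing pre-emption on quantum expiry.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst time}. Returns (name, slice) pairs in CPU order."""
    ready = deque(bursts.items())        # FIFO ready queue of (name, remaining)
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)    # run for at most one quantum
        timeline.append((name, run))
        if remaining > run:              # quantum expired: pre-empt, re-queue
            ready.append((name, remaining - run))
    return timeline

print(round_robin({"P1": 5, "P2": 3}, quantum=2))
# [('P1', 2), ('P2', 2), ('P1', 2), ('P2', 1), ('P1', 1)]
```

P1 never gets to hold the CPU for its full 5 units at once; it is forced out twice, which is exactly the "forced out" behaviour that distinguishes pre-emptive from non-pre-emptive scheduling.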
Scheduling Criteria

Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm may favour one class of processes over another. To select a scheduling algorithm efficiently, the following criteria should be taken into consideration:

CPU utilization: Keeping the CPU as busy as possible.

Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per time unit, called throughput.

Waiting time: The sum of the periods a process spends waiting in the ready queue.

Response time: The time it takes to start responding, not the time it takes to output the response.

Note: The CPU-scheduling algorithm does not affect the amount of time during which a process executes or performs I/O; it affects only the amount of time a process spends waiting in the ready queue. It is desirable to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time, and response time.

Terminology

Arrival Time (AT): Time at which the process enters the ready state.
Burst Time (BT): Amount of CPU time required by the process to finish its execution.
Completion Time (CT): Time at which the process finishes its execution.
Turn Around Time (TAT): TAT = CT - AT = WT + BT
Waiting Time (WT): WT = TAT - BT

FCFS (First Come First Serve)

FCFS is the simplest scheduling algorithm: as the name suggests, the process that requests the CPU first is allocated the CPU first. Implementation is managed by a FIFO queue. It is always non-pre-emptive in nature.
P.No | AT | BT | CT | TAT = CT - AT | WT = TAT - BT
P0   | 2  | 4  |    |               |
P1   | 1  | 2  |    |               |
P2   | 0  | 3  |    |               |
P3   | 4  | 2  |    |               |
P4   | 3  | 1  |    |               |
Average

Advantage
Easy to understand, and can easily be implemented using a queue data structure.
Can be used for background processes where execution is not urgent.

P.No | AT | BT  | TAT = CT - AT | WT = TAT - BT
P0   | 0  | 100 |               |
P1   | 1  | 2   |               |
Average

P.No | AT | BT  | TAT = CT - AT | WT = TAT - BT
P0   | 1  | 100 |               |
P1   | 0  | 2   |               |
Average

Convoy Effect
If smaller processes have to wait longer for the CPU because of a larger process, this is called the convoy effect; it results in a higher average waiting time. The remedy is to execute smaller processes before longer ones, which achieves a lower average waiting time.

Disadvantage
FCFS suffers from the convoy effect: smaller processes must wait behind a larger process, which results in a large average waiting time.
The FCFS algorithm is thus particularly troublesome for time-sharing systems (due to its non-pre-emptive nature), where it is important that each user gets a share of the CPU at regular intervals.
Higher average waiting time and TAT compared to other algorithms.

Shortest Job First (SJF) (non-pre-emptive) / Shortest Remaining Time First (SRTF) (pre-emptive)

Whenever we decide which process to run next, out of all available processes the CPU is assigned to the process with the smallest CPU burst requirement (also called Shortest Next CPU Burst scheduling). If there is a tie, FCFS is used to break it.
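Before moving on to SJF: the FCFS tables above can be filled in programmatically. Below is a minimal sketch of my own (the function name `fcfs` and the code are illustrative, not from the slides): it serves processes strictly in arrival order and accumulates completion, turnaround, and waiting times.

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst). Returns {name: (CT, TAT, WT)}."""
    result = {}
    time = 0
    # Serve strictly in arrival order (FIFO); ties keep input order.
    for name, at, bt in sorted(processes, key=lambda p: p[1]):
        time = max(time, at) + bt  # CPU may sit idle until the process arrives
        ct = time
        tat = ct - at              # Turnaround = Completion - Arrival
        wt = tat - bt              # Waiting = Turnaround - Burst
        result[name] = (ct, tat, wt)
    return result

# Data from the first exercise table above
table = [("P0", 2, 4), ("P1", 1, 2), ("P2", 0, 3), ("P3", 4, 2), ("P4", 3, 1)]
times = fcfs(table)
avg_wt = sum(v[2] for v in times.values()) / len(times)   # 3.4
```

Running the same function on the two convoy tables shows the effect directly: with the 100-unit job first the average waiting time is 49.5, but with the 2-unit job first it drops to 0.5.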
It supports both versions, non-pre-emptive and pre-emptive (a purely greedy approach).

In Shortest Job First (SJF, non-pre-emptive), once the process with the smallest CPU burst among the available processes is scheduled on the CPU, it cannot be pre-empted, even if a new process arrives whose CPU burst requirement is smaller than the remaining burst of the running process.

In Shortest Remaining Time First (SRTF, pre-emptive), whenever a process enters the ready state we make the scheduling decision again: if the new process has a smaller CPU burst requirement than the remaining burst of the running process, the running process is pre-empted and the new process is scheduled on the CPU. This version (SRTF) is also called optimal, as it guarantees minimal average waiting time.

P.No | AT | BT | CT | TAT = CT - AT | WT = TAT - BT
P0   | 1  | 7  |    |               |
P1   | 2  | 5  |    |               |
P2   | 3  | 1  |    |               |
P3   | 4  | 2  |    |               |
P4   | 5  | 8  |    |               |
Average

Advantage
The pre-emptive version guarantees minimal average waiting time, so it is sometimes referred to as an optimal algorithm.
Provides a standard for other algorithms in terms of average waiting time.
Provides better average response time compared to FCFS.

Disadvantage
A process with a longer CPU burst requirement may go into starvation and suffer poor response time.
This algorithm cannot be implemented directly, as there is no way to know the length of the next CPU burst. Since exact SJF is not implementable, we can use a technique that tries to predict the next CPU burst (for example, from the process's previous bursts).

Priority Scheduling

Here a priority is associated with each process. At any instant, out of all available processes, the CPU is allocated to the process possessing the highest priority (which may be the higher or the lower number, depending on the system). Ties are broken in FCFS order.
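Returning briefly to SRTF: the table above can be verified with a tick-by-tick simulation. This is a rough sketch of my own (function name and tie-break convention assumed, not from the slides): at every time unit, the arrived process with the smallest remaining burst runs, ties going to the earlier arrival.

```python
def srtf(processes):
    """processes: list of (name, arrival, burst). Returns {name: completion_time}."""
    remaining = {n: bt for n, at, bt in processes}
    arrival = {n: at for n, at, bt in processes}
    done, time = {}, min(arrival.values())
    while remaining:
        # Among arrived processes, pick the smallest remaining burst (FCFS tie-break).
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1           # CPU idles until the next arrival
            continue
        cur = min(ready, key=lambda n: (remaining[n], arrival[n]))
        remaining[cur] -= 1     # run the chosen process for one time unit
        time += 1
        if remaining[cur] == 0:
            done[cur] = time
            del remaining[cur]
    return done

# Data from the SJF/SRTF exercise table above
table = [("P0", 1, 7), ("P1", 2, 5), ("P2", 3, 1), ("P3", 4, 2), ("P4", 5, 8)]
ct = srtf(table)
avg_wt = sum(ct[n] - at - bt for n, at, bt in table) / len(table)   # 4.4
```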
Seniority (arrival order) and burst time are given no importance. Priority scheduling supports both non-pre-emptive and pre-emptive versions.

In non-pre-emptive priority scheduling, once the process with the highest priority among the available processes is scheduled on the CPU, it cannot be pre-empted, even if a new process enters the system with a priority higher than that of the running process.

In pre-emptive priority scheduling, the process with the highest priority among the available processes is scheduled on the CPU; if a new process enters the system with a priority higher than that of the running process, a context switch occurs and the processor is given to the new process.

There is no general agreement on whether 0 is the highest or the lowest priority; it can vary from system to system.

P.No | AT | BT | Priority | CT | TAT = CT - AT | WT = TAT - BT
P0   | 1  | 4  | 4        |    |               |
P1   | 2  | 2  | 5        |    |               |
P2   | 2  | 3  | 7        |    |               |
P3   | 3  | 5  | 8 (H)    |    |               |
P4   | 3  | 1  | 5        |    |               |
P5   | 4  | 2  | 6        |    |               |
Average

Advantage
Gives special treatment to system processes.
Allows us to run an important process even if it is a user process.

Disadvantage
A process with a low priority may starve for the CPU.
Gives no guarantee on response time or waiting time.

Note: Priority scheduling is used especially to support system processes or important user processes.

Ageing: a technique of gradually increasing the priority of processes that wait in the system for a long time, e.g. raising the priority after every 10 minutes. Ageing prevents starvation.

Round Robin

This algorithm is designed for time-sharing systems, where the idea is not to complete one process before starting another, but to be responsive and divide CPU time among the processes in the ready state (in circular order). The CPU scheduler goes around the ready queue, allocating the CPU to each process for at most one time quantum, say q.
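Before continuing with round robin: the pre-emptive priority table above (where the higher number means higher priority, as the "(H)" mark suggests) can be checked with the same tick-by-tick approach. A sketch of my own, assuming ties fall back to FCFS order:

```python
def preemptive_priority(processes):
    """processes: list of (name, arrival, burst, priority); larger number = higher priority.
    Returns {name: completion_time}."""
    remaining = {n: bt for n, at, bt, pr in processes}
    arrival = {n: at for n, at, bt, pr in processes}
    prio = {n: pr for n, at, bt, pr in processes}
    done, time = {}, min(arrival.values())
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        # Highest priority wins; ties broken by earlier arrival (FCFS).
        cur = max(ready, key=lambda n: (prio[n], -arrival[n]))
        remaining[cur] -= 1
        time += 1
        if remaining[cur] == 0:
            done[cur] = time
            del remaining[cur]
    return done

# Data from the priority exercise table above
table = [("P0", 1, 4, 4), ("P1", 2, 2, 5), ("P2", 2, 3, 7),
         ("P3", 3, 5, 8), ("P4", 3, 1, 5), ("P5", 4, 2, 6)]
ct = preemptive_priority(table)
avg_wt = sum(ct[n] - at - bt for n, at, bt, pr in table) / len(table)   # 7.5
```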
A process can hold the CPU for at most one quantum in one go: either it terminates within the quantum (if its remaining CPU burst is less than q), or a context switch occurs and the process must release the CPU, re-enter the ready queue, and wait for its next turn.

If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units. Each process must wait no longer than (n - 1) x q time units until its next time quantum.

P.No | AT | BT | CT | TAT = CT - AT | WT = TAT - BT
P0   | 0  | 4  |    |               |
P1   | 1  | 5  |    |               |
P2   | 2  | 2  |    |               |
P3   | 3  | 1  |    |               |
P4   | 4  | 6  |    |               |
P5   | 6  | 3  |    |               |
Average

Advantage
Performs best in terms of average response time.
Works well for time-sharing systems, client-server architectures, and interactive systems.

Disadvantage
Longer processes need many passes around the queue to finish, so their turnaround time grows.
Performance depends heavily on the time quantum. If the quantum is very small, average response time improves, but the number of context switches grows, so effective CPU utilization drops. If the quantum is very large, average response time worsens (the behaviour approaches FCFS), but there are fewer context switches, so CPU utilization is better.
Takes no account of priority.

Multi-Level Queue Scheduling

After studying all the important approaches to CPU scheduling, we must understand that no single one of them is good for every process in the system. Different processes have different scheduling needs, so we need a kind of hybrid scheduling that supports all classes of processes. Processes are easily classified into different groups: system processes, foreground (interactive) processes, and background (batch) processes. A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
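Before detailing multilevel queues: the RR table above does not fix a quantum, so here is a queue-based simulation assuming q = 2 (my choice, purely illustrative). The tie-break convention is also an assumption: processes that arrive during or at the end of a slice are enqueued before the pre-empted process.

```python
from collections import deque

def round_robin(processes, q):
    """processes: list of (name, arrival, burst). Returns {name: completion_time}."""
    pending = sorted(processes, key=lambda p: p[1])      # not yet arrived
    remaining = {n: bt for n, at, bt in processes}
    queue, done, time, i = deque(), {}, 0, 0
    while len(done) < len(processes):
        # Admit everything that has arrived by now.
        while i < len(pending) and pending[i][1] <= time:
            queue.append(pending[i][0])
            i += 1
        if not queue:
            time = pending[i][1]                         # CPU idles until next arrival
            continue
        cur = queue.popleft()
        run = min(q, remaining[cur])                     # one slice, at most q units
        time += run
        remaining[cur] -= run
        # Arrivals during the slice enter the queue before the pre-empted process.
        while i < len(pending) and pending[i][1] <= time:
            queue.append(pending[i][0])
            i += 1
        if remaining[cur] == 0:
            done[cur] = time
        else:
            queue.append(cur)
    return done

# Data from the RR exercise table above, with assumed quantum q = 2
table = [("P0", 0, 4), ("P1", 1, 5), ("P2", 2, 2), ("P3", 3, 1), ("P4", 4, 6), ("P5", 6, 3)]
ct = round_robin(table, q=2)
```

With a different quantum or a different tie-break rule the completion times change, which is exactly the sensitivity the disadvantage list above describes.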
The processes are permanently assigned to one queue, generally based on the properties and requirements of the process.

Each queue has its own scheduling algorithm. For example:
System processes might use a priority algorithm.
Interactive processes might be scheduled by an RR algorithm.
Batch processes might be scheduled by an FCFS algorithm.
In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority pre-emptive scheduling, or as round robin with a different time quantum per queue.

Multi-Level Feedback Queue Scheduling

The problems with multilevel queue scheduling are deciding the number of ready queues and the scheduling algorithms within and between the queues, and the fact that once a process enters a specific queue, its queue cannot be changed afterwards. The multilevel feedback queue scheduling algorithm allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it is moved to a lower-priority queue. In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.

For example: a process entering the ready queue is put in queue 0. A process in queue 0 is given a time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it does not complete, it is pre-empted and put into queue 2. Processes in queue 2 are run on an FCFS basis, but only when queues 0 and 1 are empty.
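The three-queue example just described (8 ms quantum, then 16 ms, then FCFS) can be traced for a single CPU-bound job. This is a toy illustration of my own, assuming the job runs alone with no competing arrivals:

```python
def mlfq_single(burst, quanta=(8, 16)):
    """Track which queues a single job visits under the 8/16/FCFS scheme.
    Returns (completion_time, queues_visited). Assumes the job runs alone."""
    time, visited = 0, []
    for level, q in enumerate(quanta):
        visited.append(f"Q{level}")
        run = min(q, burst)     # one quantum at this level, or less if the job finishes
        time += run
        burst -= run
        if burst == 0:
            return time, visited
    # Whatever is left runs to completion in the FCFS queue.
    visited.append("Q2")
    return time + burst, visited

ct, path = mlfq_single(30)      # 8 units in Q0, 16 in Q1, the last 6 in Q2
```

A 30 ms burst thus demotes through all three queues, while a 5 ms burst finishes inside queue 0.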
In general, a multilevel feedback queue scheduler is defined by the following parameters:
The number of queues
The scheduling algorithm for each queue
The method used to determine when to upgrade a process to a higher-priority queue
The method used to determine when to demote a process to a lower-priority queue

This definition makes the multilevel feedback queue the most general CPU-scheduling algorithm: it can be configured to match a specific system under design. Unfortunately, it is also the most complex algorithm, since defining the best scheduler requires some means of selecting values for all the parameters.

Process Synchronization & Race Condition

As we have seen, in a multiprogramming environment a good number of processes compete for a limited number of resources. Concurrent access to shared data may at some point result in data inconsistency, e.g.:

P()
{
    read(i);
    i = i + 1;
    write(i);
}

A race condition is a situation in which the output of a process depends on the execution sequence of processes, i.e. if we change the order of execution of different processes with respect to each other, the output may change.

General Structure of a Process

P()
{
    while (T)
    {
        Initial Section
        Entry Section
            Critical Section
        Exit Section
        Remainder Section
    }
}

Initial Section: where the process accesses private resources.
Entry Section: the part of the code where each process requests permission to enter its critical section.
Critical Section: where the process accesses shared resources.
Exit Section: where the process exits from its critical section.
Remainder Section: the remaining code.

Criteria to Solve the Critical Section Problem

Mutual Exclusion: No two processes should be present inside the critical section at the same time, i.e. only one process is allowed in the critical section at any instant of time.
Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next (i.e. only processes that actually wish to enter take part), and this decision cannot be postponed indefinitely — there should be no deadlock.

Bounded Waiting: There exists a bound, or limit, on the number of times other processes are allowed to enter their critical sections after a process has requested entry; no process should wait indefinitely to enter the CS.

Some Points to Remember
Mutual Exclusion and Progress are mandatory requirements that must be satisfied by any valid solution to the critical section problem.
Bounded Waiting is an optional criterion; if it is not satisfied, starvation may occur.

Solutions to the Critical Section Problem
We generally have the following solutions to a critical section problem:
1. Two Process Solutions
   1. Using a Boolean variable turn
   2. Using a Boolean array flag
   3. Peterson's solution
2. Operating System Solutions
   1. Counting semaphore
   2. Binary semaphore
3. Hardware Solutions
   1. Test and Set Lock
   2. Disabling interrupts

Two Process Solution
In general it is difficult to write a valid solution to the critical section problem among multiple processes in the first attempt, so it is better to first write a two-process solution and then generalize it to an N-process solution. There are three different ideas, of which some are invalid and some are valid:
1. Using a Boolean variable turn
2. Using a Boolean array flag
3. Peterson's solution

Here we use a Boolean variable turn, initialized randomly (0/1):

P0:                              P1:
while (1)                        while (1)
{                                {
    while (turn != 0);               while (turn != 1);
    /* Critical Section */           /* Critical Section */
    turn = 1;                        turn = 0;
    /* Remainder Section */          /* Remainder Section */
}                                }

The solution follows Mutual Exclusion, as the two processes cannot enter the CS at the same time. The solution does not follow Progress, as it suffers from strict alternation: we never ask a process whether it wants to enter the CS or not.

Here we use a Boolean array flag with two cells, each initialized to F:

P0:                              P1:
while (1)                        while (1)
{                                {
    flag[0] = T;                     flag[1] = T;
    while (flag[1]);                 while (flag[0]);
    /* Critical Section */           /* Critical Section */
    flag[0] = F;                     flag[1] = F;
    /* Remainder Section */          /* Remainder Section */
}                                }

This solution follows the Mutual Exclusion criterion, but in trying to achieve Progress the system can end up in a deadlock state (both flags set, each process waiting on the other).

Dekker's algorithm (shown for Pi; Pj is symmetric, with i and j exchanged):

do
{
    flag[i] = true;
    while (flag[j])
    {
        if (turn == j)
        {
            flag[i] = false;
            while (turn == j);
            flag[i] = true;
        }
    }
    /* Critical Section */
    turn = j;
    flag[i] = false;
    /* Remainder Section */
} while (true);

Peterson's solution is a classic software-based solution to the critical-section problem for two processes. It uses both the turn variable and the Boolean flag array:

P0:                                          P1:
while (1)                                    while (1)
{                                            {
    flag[0] = T;                                 flag[1] = T;
    turn = 1;                                    turn = 0;
    while (turn == 1 && flag[1] == T);           while (turn == 0 && flag[0] == T);
    /* Critical Section */                       /* Critical Section */
    flag[0] = F;                                 flag[1] = F;
    /* Remainder Section */                      /* Remainder Section */
}                                            }

This solution ensures Mutual Exclusion, Progress, and Bounded Waiting.

Operating System Solution (Semaphores)
1. Semaphores are synchronization tools with which we will attempt an n-process solution.
2. A semaphore S is a simple integer variable that, apart from initialization, can be accessed only through two standard atomic operations: wait(S) and signal(S).
The wait(S) operation was originally termed P(S), and signal(S) was originally called V(S). Their classical (busy-wait) definitions are:

wait(S)                          signal(S)
{                                {
    while (S <= 0);                  S = S + 1;
    S = S - 1;                   }
}
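The wait/signal semantics above can be mimicked in ordinary code to see how a binary semaphore protects the i = i + 1 race from earlier. This is a toy sketch of my own: the class name and the use of a lock/condition pair (standing in for the kernel's atomicity, instead of a true busy wait) are assumptions, not part of the original notes.

```python
import threading

class Semaphore:
    """Toy counting semaphore mirroring wait()/signal() from the notes.
    A real kernel makes these operations atomic; here a condition variable stands in."""
    def __init__(self, value):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):                  # P(S): block while S <= 0, then S = S - 1
        with self._cond:
            while self._value <= 0:
                self._cond.wait()
            self._value -= 1

    def signal(self):                # V(S): S = S + 1, wake one waiter
        with self._cond:
            self._value += 1
            self._cond.notify()

# A semaphore initialized to 1 acts as a binary semaphore (mutex).
s, i = Semaphore(1), 0

def worker():
    global i
    for _ in range(10000):
        s.wait()
        i = i + 1                    # critical section: read, increment, write
        s.signal()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the semaphore, all 4 x 10000 increments survive: i == 40000.
```

Without the wait/signal pair around the increment, interleaved read-modify-write sequences could lose updates — exactly the race condition described above.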