



COM 306 OPERATING SYSTEMS
PROCESS MANAGEMENT AND SCHEDULING
Lecturer: Akande Noah O. (Ph.D.)
Learn and Live (www.lmu.edu.ng)

Process Management in OS
▪ A process is a program that is under execution, i.e. a program that is in its execution phase.
▪ Every system consists of many processes, such as operating system processes and user processes.
▪ Operating system processes execute system code, whereas user processes execute user code.
▪ The execution of a process must progress in sequential order or based on some priority or algorithm.
▪ Therefore, the OS must allocate resources that enable processes to share and exchange information.
▪ It also protects the resources of each process from other processes and allows synchronization among processes.

Process vs Program
▪ A process is the unit of work in the system.
▪ The work in this instance is the program, which can involve multiple processes.
▪ The terms process, job and task are used interchangeably.
▪ A program is a passive entity, but a process is an active entity.
▪ Whenever an executable file or program is loaded into memory, the program creates one or more processes.

Process Management in OS
▪ It is the job of the OS to manage all the running processes in the system.
▪ It handles operations by performing tasks like process scheduling and resource allocation.

Process Architecture
▪ The architectural diagram of a process contains the following sections:
– Stack: stores temporary data such as function parameters, return addresses, and local variables.
– Heap: memory that is dynamically allocated to the process during its run time.
– Data: contains the global variables.
– Text: includes the current activity, represented by the value of the program counter and the contents of the processor registers.

Process States
▪ A process state is the condition of the process at a specific instant of time.
▪ It also defines the current position of the process.
▪ There are mainly seven stages of a process (a small sketch of these states as a state machine appears after this section):
– New: the process is created when a specific program is called from secondary memory (hard disk) into primary memory (RAM).
– Ready: the process has been loaded into primary memory and is waiting to be assigned the CPU by the short-term scheduler.
– Waiting: the process is waiting for some event, such as I/O, to occur.
– Execution/Running: the process instructions are currently being executed.
– Blocked: a time interval during which a process is waiting for an event such as an I/O operation to complete, needs input from the user, or needs access to a critical region.
– Suspended: a process that is already in the ready state (primary memory) is moved back into secondary memory because a higher-priority process needs its resources.
– Terminated: the process has completed its execution.
▪ After completing every step, all the resources used by the process are released and the memory becomes free.

EVENTS
▪ During process execution, the following events may occur:
– The process issues an I/O request and is then placed in an I/O queue.
– The process creates a new subprocess and must wait until the subprocess terminates.
– Due to an interruption, the process may be removed forcefully from the CPU and put back in the ready queue.
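The seven states and the events above can be pictured as a small state machine. The following Python sketch is illustrative only: the class names are my own and the transition table is a simplified assumption based on the bullets above, not an exact specification from the slides.

from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    BLOCKED = auto()
    SUSPENDED = auto()
    TERMINATED = auto()

# Simplified transition table mirroring the seven-stage model described above.
TRANSITIONS = {
    State.NEW: {State.READY},                         # admitted into primary memory
    State.READY: {State.RUNNING, State.SUSPENDED},    # dispatched, or swapped out
    State.RUNNING: {State.READY, State.WAITING, State.BLOCKED, State.TERMINATED},
    State.WAITING: {State.READY},                     # awaited event occurred
    State.BLOCKED: {State.READY},                     # I/O completed or input arrived
    State.SUSPENDED: {State.READY},                   # swapped back into memory
    State.TERMINATED: set(),
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = State.NEW

    def move_to(self, new_state):
        # reject transitions the model above does not allow
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process(pid=1)
p.move_to(State.READY)    # loaded by the long-term scheduler
p.move_to(State.RUNNING)  # dispatched by the short-term scheduler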
How is the ready queue processed?
▪ The CPU scheduler (short-term scheduler) takes a process from the ready queue and puts it on the CPU for execution.
▪ When a new process is created and is ready for execution, it is put into the ready queue.
▪ The operating system allocates CPU time to the ready process.
▪ After getting CPU time, the process executes (runs) on the CPU.

What happens during an interrupt event?
▪ During execution, several events may occur.
▪ Due to an interrupt event, the process may be removed forcefully from the CPU and put back in the ready queue.

What happens during an I/O event?
▪ When the input is ready or the output is completed, the process is sent back to the ready queue for CPU processing.

How does a process move into the ready queue?
▪ When a process is created, it is put in a job pool.
▪ This pool consists of all the processes in the system.
▪ The job scheduler, also called the long-term scheduler, takes a job or process from the job pool and puts it in the ready queue.

Process Control Blocks (PCB)
▪ Every process is represented in the operating system by a process control block, which is also called a task control block.
▪ The PCB is a data structure that is maintained by the operating system for every process.
▪ The PCB is identified by an integer Process ID (PID).
▪ It stores all the information required to keep track of a running process.
▪ It is also accountable for storing the contents of the processor registers.
▪ These are saved when the process leaves the running state and are restored when it returns to it.
▪ The information in the PCB is updated by the OS as soon as the process makes a state transition.

Components of a PCB
(a sketch of a PCB as a simple record appears at the end of this section)
▪ Process state: the current state of the process.
– It can be any one of the five states (new, ready, running, waiting and terminated).
▪ Program counter: holds the address of the next instruction to be executed for the process.
▪ Process number: a unique identification number for every process.
▪ CPU registers: temporary storage used by the CPU during process execution. They include accumulators, index and general-purpose registers, and condition-code information.
▪ CPU scheduling information: information such as the process priority, pointers to scheduling queues, and various other scheduling parameters.
▪ Accounting information: the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
▪ Memory-management information: the values of the base and limit registers and the page or segment tables.
▪ I/O status information: the I/O devices allocated to the process, a list of open files, and so on.
▪ In short, the PCB acts as a repository of information, and this information may vary from process to process.

Process Scheduling
▪ In multiprogramming, more than one process is present in memory at a time.
▪ When a running process has to wait, the operating system takes the CPU away from it and assigns the CPU to another process.
▪ The act of reallocating the CPU from one running process to another is called CPU scheduling.
▪ Process scheduling allows the OS to allocate a time interval of CPU execution to each process.
▪ Another important reason for using a process scheduling system is that it keeps the CPU busy all the time.
▪ This allows you to get the minimum response time for programs.
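Returning to the PCB fields listed above, the following minimal Python sketch shows how such a record might look. The field names and the dataclass layout are my own illustration of the bullets, not the layout of any particular kernel.

from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                         # process number (unique ID)
    state: str = "new"                               # process state
    program_counter: int = 0                         # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU register contents
    priority: int = 0                                # CPU scheduling information
    base_register: int = 0                           # memory-management information
    limit_register: int = 0
    open_files: list = field(default_factory=list)   # I/O status information
    cpu_time_used: float = 0.0                       # accounting information

# On a state transition the OS saves the running process's registers into its
# PCB and later restores them, so execution resumes from the same point.
pcb = PCB(pid=42, priority=3)
pcb.state = "ready"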
Scheduling Objectives
▪ Important objectives of process scheduling are to:
1. Maximize the number of interactive users within acceptable response times.
2. Achieve a balance between response time and utilization.
3. Avoid indefinite postponement and enforce priorities.
4. Give preference to processes holding key resources.

Process Scheduling Queues
▪ Process scheduling queues maintain a distinct queue for each process state, holding the PCBs of the processes in that state.
▪ All processes in the same execution state are placed in the same queue.
▪ Therefore, whenever the state of a process changes, its PCB is unlinked from its current queue and moved to the queue of the new state.
▪ The three types of operating system queues are:
1. Ready queue
– This queue consists of processes that reside in main memory and are ready and waiting for execution.
– The CPU scheduler (short-term scheduler) takes a process from the ready queue and puts it on the CPU for execution.
– The process to be put on the CPU is decided by a scheduling algorithm.
2. Device queues
– A device queue contains the processes that are waiting for the completion of an I/O request. Each device has its own device queue.
– Devices usually have device controller hardware and device driver software that work as part of the OS to control them.
– Many device drivers have device queues that are used to handle I/O requests specific to the device.
– For example, if you type a sentence on your keyboard, the sentence is received by the keyboard controller and placed on an input queue. This I/O queue is read by a driver (part of the OS), and from the input data queue the data is moved to the ready queue for CPU processing.
– Input requests come from the keyboard, mouse, touchscreen and other such devices. After CPU processing, output requests are sent to the output device.
3. Job queue
– It is used to store all the processes in the system.

Process vs Threads
▪ A process is a program being executed.
▪ A process can be further divided into independent units known as threads.
▪ A thread is like a small, lightweight process within a process.
▪ Conversely, a collection of threads can be seen as a process.

Threads
▪ A thread is a single sequence stream within a process.
▪ It is also called a separate execution path within a process.
▪ It is a lightweight process that the operating system can schedule and run concurrently with other threads.
▪ The operating system creates and manages threads, and they share the same memory and resources as the program that created them.
▪ This enables multiple threads to collaborate and work efficiently within a single program.
▪ Each thread belongs to exactly one process.
▪ In an operating system that supports multithreading, a process can consist of many threads.
▪ However, threads can run truly in parallel only if there is more than one CPU; otherwise the threads have to context-switch on the single CPU.
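To make the point that threads share the memory of their process concrete, here is a small illustrative Python sketch (not from the slides; the worker function and counts are assumptions): two threads increment the same counter, and a lock keeps the shared update consistent.

import threading

counter = 0                      # shared data: lives in the process's address space
lock = threading.Lock()          # protects the shared counter

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # threads of one process see the same variable
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                   # 200000: both threads updated the same memory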
Differences between Threads and Processes
▪ Resources: processes have their own address space and resources, such as memory and file handles, whereas threads share memory and resources with the program that created them.
▪ Scheduling: processes are scheduled onto the processor by the operating system, whereas threads are scheduled onto the processor by the operating system or by the program itself.
▪ Creation: the operating system creates and manages processes, whereas either the program or the operating system creates and manages threads.
▪ Communication: because processes are isolated from one another and must rely on inter-process communication mechanisms, they generally have more difficulty communicating with one another than threads do.
– Threads, on the other hand, can interact directly with other threads within the same program.

Why Do We Need Threads?
▪ Threads run in parallel, improving application performance. Each thread has its own CPU state and stack, but threads share the address space and environment of their process.
▪ Threads can share common data, so they do not need to use inter-process communication.
– Like processes, threads also have states such as ready, executing and blocked.
▪ Priorities can be assigned to threads just as to processes, and the highest-priority thread is scheduled first.
▪ Each thread has its own Thread Control Block (TCB).
▪ As with processes, a context switch occurs for a thread, and the register contents are saved in the TCB.

Why Multi-Threading?
▪ Multithreading is aimed at achieving parallelism by dividing a process into multiple threads.
– For example, in a browser, different tabs can be different threads. MS Word uses multiple threads: one thread to format the text, another thread to process inputs, and so on.
▪ Multithreading is therefore a technique used in operating systems to improve the performance and responsiveness of computer systems.
▪ Multithreading allows multiple threads (i.e., lightweight processes) to share the resources of a single process, such as the CPU, memory, and I/O devices.

Types of Threads
▪ User-Level Thread (ULT)
– A user-level thread is not created using system calls; it is implemented in a user-level library (a toy sketch of this idea appears after this section).
– The kernel does no work in the management of user-level threads, so it sees the process as if it were single-threaded.
– Advantages of user-level threads: implementation is easier than for kernel-level threads; context-switch time is lower; user-level threads are more efficient than kernel-level threads; and because only a program counter, register set and stack space are present, a user-level thread has a simple representation.
▪ Kernel-Level Thread (KLT)
– The kernel knows about and manages the threads.
– Instead of a thread table in each process, the kernel itself has a thread table that keeps track of all the threads in the system.
– In addition, the kernel maintains the traditional process table to keep track of processes.
– The OS kernel provides system calls to create and manage threads.
– Advantages of kernel-level threads: the kernel has up-to-date information on all threads; applications that block frequently are better handled by kernel-level threads; and whenever a process requires more time, kernel-level threads allow more time to be given to it.
▪ In summary, for each ULT the process keeps track of the thread using a thread table, while for each KLT the kernel maintains a thread table (TCB) as well as the process table (PCB).
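The user-level-thread idea can be sketched with plain Python generators standing in for ULTs: a tiny scheduler in user space switches between them cooperatively, and the kernel sees only one thread of execution. This is a hypothetical teaching sketch of the concept, not a real thread library.

from collections import deque

def task(name, steps):
    # A 'user-level thread': it yields to hand control back to the scheduler.
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # voluntary switch point (no system call involved)

def run(tasks):
    # A round-robin scheduler implemented entirely in the user-level library.
    ready = deque(tasks)           # the library's own 'thread table'
    while ready:
        t = ready.popleft()
        try:
            next(t)                # resume the thread until its next yield
            ready.append(t)        # still alive: back to the end of the ready queue
        except StopIteration:
            pass                   # thread finished

run([task("A", 3), task("B", 2)])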
Components of Threads
▪ A. Stack space
– The stack is used to store local variables, pass parameters in function calls, store return addresses, and create stack frames containing space for a function's local variables.
– The stack size is determined when the thread is created, since the stack needs to occupy contiguous address space.
– This means that the entire address space for the thread's stack has to be reserved at the point of creating the thread.
– If the stack is too small, it can overflow, an error condition known as stack overflow.
▪ B. Register set
– A thread's register set is the collection of CPU registers used to store the thread's state.
– The register set includes the program counter (PC), the stack pointer (SP), and other registers that hold the thread's context.
– When a thread is created, it is assigned its own register set, which stores the thread's state while it is running.
– The register set is saved when the thread is suspended and restored when the thread resumes execution.
– A kernel debugger can use the .thread command to set the register context for a specific thread, which gives access to the most important registers and the stack trace for that thread.
▪ C. Program counter
– The program counter (PC) is a register that stores the address of the next instruction to be executed by the processor.
– Each CPU has a single hardware program counter, and each thread has a program counter value that is loaded into the hardware program counter only when the thread is executing.
– A process may effectively use multiple hardware program counters when executing on a multiprocessing system: each thread can run on a separate processor and use that processor's program counter.

Lifecycle of a Thread
▪ The following are the stages a thread goes through in its lifetime:
– New: the lifecycle of a thread starts in this state. It remains there until the program starts the thread.
– Runnable: a thread becomes runnable after it starts executing the task given to it.
– Waiting: while waiting for another thread to perform a task, the currently running thread goes into the waiting state and transitions back after receiving a signal from the other thread.
– Timed waiting: a runnable thread enters this state for a specific time interval and transitions back when the interval expires or the event the thread was waiting for occurs.
– Terminated (dead): a thread enters this state after completing its task.

Types of Execution in OS
▪ There are two types of execution:
– Concurrent execution: occurs when the processor switches resources between the threads of a multithreaded process on a single processor.
– Parallel execution: occurs when every thread of the same multithreaded process runs on a separate processor at the same time.

Process Schedulers
▪ A scheduler is a type of system software that handles process scheduling.
▪ There are mainly three types of process schedulers:
– Long-term
– Short-term
– Medium-term

Long-Term Scheduler
▪ The long-term scheduler is also known as the job scheduler.
▪ This scheduler selects processes from the job queue and loads them into memory for execution.
▪ It also regulates the degree of multiprogramming (a toy sketch of this follows below).
▪ The main goal of this scheduler is to offer a balanced mix of jobs, such as CPU-bound and I/O-bound jobs, which keeps multiprogramming manageable.
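The following sketch (mine, not the lecturer's; the job names, the cap of four, and the balancing rule are all assumptions) shows how a long-term scheduler might admit jobs from the job pool into the ready queue while capping the degree of multiprogramming and keeping a mix of CPU-bound and I/O-bound jobs.

from collections import deque

DEGREE_OF_MULTIPROGRAMMING = 4       # max processes resident in memory at once

job_pool = deque([                   # (name, kind) pairs still waiting on disk
    ("backup", "io"), ("compile", "cpu"), ("report", "io"),
    ("render", "cpu"), ("index", "cpu"), ("mail", "io"),
])
ready_queue = deque()

def long_term_schedule():
    # Admit jobs until memory is 'full', preferring to balance CPU- and I/O-bound jobs.
    while job_pool and len(ready_queue) < DEGREE_OF_MULTIPROGRAMMING:
        cpu_bound = sum(1 for _, kind in ready_queue if kind == "cpu")
        io_bound = len(ready_queue) - cpu_bound
        want = "cpu" if cpu_bound <= io_bound else "io"
        # pick the first job of the preferred kind, else just the next job in the pool
        pick = next((j for j in job_pool if j[1] == want), job_pool[0])
        job_pool.remove(pick)
        ready_queue.append(pick)

long_term_schedule()
print(list(ready_queue))   # at most 4 admitted jobs, mixed between cpu- and io-bound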
Medium-Term Scheduler
▪ The medium-term scheduler enables the OS to handle swapping between processes.
▪ Under this scheduler, a running process can become suspended because of an I/O request.
▪ A suspended process cannot make any progress towards completion.
▪ To remove the process from memory and make space for other processes, the suspended process is moved to secondary storage.

Short-Term Scheduler
▪ Short-term scheduling is also known as CPU scheduling.
▪ The main goal of this scheduler is to boost system performance according to set criteria.
▪ It helps the OS select, from the group of processes that are ready to execute, the one to which the CPU is allocated.
▪ The OS dispatcher gives control of the CPU to the process selected by the short-term scheduler.

What is a Context Switch?
▪ It is the method used by the OS to store and restore the state of a process in its PCB, so that process execution can be resumed from the same point at a later time.
▪ Context switching is essential for a multitasking OS.

CPU Scheduling Algorithms
▪ Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed.
▪ The selection is carried out by the short-term scheduler (CPU scheduler).
▪ The scheduler selects from among the processes in memory that are ready to execute and allocates the CPU to one of them.

CPU Scheduling Decisions
▪ CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for an I/O request, or when waiting for the termination of one of its child processes).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, on completion of I/O).
4. When a process terminates.

Types of CPU Scheduling
▪ There are two kinds of scheduling methods:
– Preemptive scheduling
– Non-preemptive scheduling

Preemptive Scheduling
▪ In preemptive scheduling, the CPU is allocated to a process for a limited amount of time, after which it is taken away.
▪ The process is placed back in the ready queue if it still has CPU burst time remaining.
▪ It stays in the ready queue until it gets its next chance to execute.
▪ In summary, preemptive scheduling occurs:
– when a process switches from the running to the ready state due to an interruption, and
– when a process switches from the waiting to the ready state after completion of an I/O request.
▪ Algorithms based on preemptive scheduling include Round Robin (RR), Shortest Remaining Time First (SRTF), and the preemptive version of Priority scheduling (a small Round Robin simulation follows this section).

Non-Preemptive Scheduling
▪ Non-preemptive scheduling does not interrupt a process running in the middle of its execution.
▪ Instead, it waits until the running process completes its CPU burst or terminates before allocating the CPU to another process that needs it.
▪ Algorithms based on non-preemptive scheduling include Shortest Job First (SJF) and non-preemptive Priority scheduling.
▪ Non-preemptive scheduling occurs:
– when a process switches from the running to the waiting state for an I/O request, and
– when a process terminates.
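As a worked example of preemptive scheduling, here is a small, self-contained Round Robin simulation. The process names, burst times and quantum are made up for illustration and are not taken from the slides: each process runs for at most one time quantum and, if it still has burst time left, is placed back at the tail of the ready queue.

from collections import deque

def round_robin(bursts, quantum):
    # bursts: {pid: burst_time}; returns the completion time of each process.
    ready = deque(bursts.keys())           # all processes assumed to arrive at time 0
    remaining = dict(bursts)
    clock = 0
    finish = {}
    while ready:
        pid = ready.popleft()              # dispatch the process at the head of the queue
        run = min(quantum, remaining[pid]) # run for one quantum or until the burst ends
        clock += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            ready.append(pid)              # preempted: back to the tail of the ready queue
        else:
            finish[pid] = clock            # burst finished
    return finish

print(round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=2))
# {'P2': 9, 'P1': 12, 'P3': 16}: P3, having the longest burst, finishes last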
Important CPU Scheduling Terminologies
▪ Burst time/execution time: the time required by the process to complete its execution; also called running time.
▪ Arrival time: the time at which a process enters the ready state.
▪ Finish time: the time at which a process completes and exits from memory (all assigned resources are withdrawn).
▪ The execution of a process consists of a cycle of CPU bursts and I/O bursts.
▪ It usually starts with a CPU burst, followed by an I/O burst, another CPU burst, another I/O burst, and so on.
▪ This cycle continues until the process terminates.
▪ The burst cycle is therefore the time a process needs to receive the CPU time and I/O resources required for its execution.

CPU Scheduling Criteria
▪ CPU scheduling criteria help us compare scheduling algorithms and choose the one that works best for a given workload.
▪ The five important criteria are CPU utilization, throughput, turnaround time, waiting time and response time (a small worked example follows this section).
▪ CPU utilization
– CPU utilization is the fraction of time the CPU is kept busy.
– It can range from 0 to 100 percent.
– For a real-time operating system (RTOS), it typically ranges from about 40 percent for a lightly loaded system to about 90 percent for a heavily loaded system.
▪ Throughput
– The total number of processes completed per unit time, i.e. the total amount of work done in a unit of time.
– For long processes it may be one process per hour, whereas for short processes it may be ten processes per second.
▪ Turnaround time
– The duration from process submission to completion of its execution.
– It is the sum of: the time taken to get memory allocated, the waiting time spent in the ready queue, the execution time on the CPU, and the I/O time.
▪ Waiting time
– The sum of the periods spent waiting in the ready queue, i.e. the amount of time a process waits in the ready queue to get control of the CPU.
▪ Response time
– The amount of time from when a request is submitted until the first response is produced.
– Note that it is the time until the first response, not until the completion of process execution (the final response).
▪ In general, CPU utilization and throughput are maximized while the other measures are minimized for proper optimization.
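A minimal worked example of the measures above, using made-up burst times and assuming simple first-come-first-served order, all processes arriving at time 0, and no I/O:

# FCFS example: three processes with hypothetical CPU burst times, all arriving at t=0.
bursts = {"P1": 6, "P2": 3, "P3": 1}     # burst (execution) time per process

clock = 0
for pid, burst in bursts.items():        # processes run to completion in arrival order
    start = clock                        # first time the process gets the CPU
    finish = clock + burst
    turnaround = finish - 0              # finish time minus arrival time (arrival = 0)
    waiting = turnaround - burst         # time spent in the ready queue
    response = start - 0                 # time until the first response (first dispatch)
    print(f"{pid}: finish={finish}, turnaround={turnaround}, "
          f"waiting={waiting}, response={response}")
    clock = finish

# Output:
# P1: finish=6, turnaround=6, waiting=0, response=0
# P2: finish=9, turnaround=9, waiting=6, response=6
# P3: finish=10, turnaround=10, waiting=9, response=9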
