Plat.-Tech.-Module-3 PDF



Uploaded by MiraculousVector

Eastern Visayas State University

Tags

process management, operating systems, process scheduling, computer science

Summary

This module covers process management, including process life cycles, process control blocks, CPU scheduling algorithms, and context switching. It discusses concepts like threads, single-threading, and multithreading.

Full Transcript

# MODULE 3
## PROCESS MANAGEMENT

### What is this all about?

This topic covers the operating system, processes, and scheduling algorithms. Discussions include the process life cycle, operations on processes, the process control block, process scheduling, process states, and scheduling algorithms.

### What should you know?

Upon completion of this module, you will be able to:

1. Explain concepts such as the process control block, context switching, operations on processes, and concurrent processing.
2. Appreciate how threads can be an alternative to concurrent processing.
3. Discuss how CPU scheduling affects the overall performance of a computer system.
4. Enumerate and describe the several CPU scheduling algorithms used by operating systems.

### Process

A process is a program in execution; the program itself is stored on secondary storage. An operating system runs various jobs and applications, and before a program can be executed it must first be loaded into the system's primary memory (RAM or cache). Each process is assigned a unique ID when it is created, is referenced via this ID, and keeps it until the process completes its execution.

A program is created by typing it into a text editor and then executing (running) it. When the program is run and loaded into memory, it becomes a process that carries out everything declared in the program. The program can be split into four pieces. The figure below shows a process in memory:

- **Stack**: Temporary data such as method/function parameters, return addresses, and local variables.
- **Heap**: Memory dynamically allocated to the process during its run time.
- **Text**: The compiled program code, read in from secondary storage when the program is launched.
- **Data**: The static and global variables, allocated and initialized before `main` executes.
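As a small illustration of the point that every process receives its own unique ID from the operating system, here is a minimal Python sketch (the function name `report_pid` is my own, not from the module):

```python
import multiprocessing
import os

def report_pid():
    # The child runs as a separate process with its own unique PID.
    print("child PID:", os.getpid())

if __name__ == "__main__":
    # The parent process also has a unique ID assigned by the OS.
    print("parent PID:", os.getpid())
    child = multiprocessing.Process(target=report_pid)
    child.start()
    child.join()
```

Running this prints two different PIDs: the program on disk is one artifact, but each execution of it is a distinct process with its own identifier.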
### Process Life Cycle

The state of a process is usually defined by its current activity. Below are the five broad states of a process, common across operating systems:

- **New (Create)**: The process is created.
- **Ready**: Once created, the process enters the ready state and is loaded into primary memory. It is then ready to run and waits for CPU time to execute.
- **Run**: The CPU has selected the process for execution.
- **Blocked or Wait**: When the process must wait for a file to become free or for the user's input, the OS moves it to the blocked or wait state. The process keeps waiting in primary memory; once the I/O operation completes, the process returns to the ready state.
- **Terminated or Exit**: The process has finished execution normally or has been aborted.

### Process Control Block

Each process in the operating system is represented by a process control block (PCB). It is a data structure that comprises the following:

- **Process state**: The state can be new, ready, running, waiting, or halted.
- **Program counter**: Indicates the address of the next instruction to be executed.
- **CPU registers**: Includes accumulators, stack pointers, and index registers. This state must be saved when an interrupt occurs so that the process can later be continued.
- **CPU-scheduling information**: Includes process priority, pointers to scheduling queues, and other scheduling parameters.
- **Memory-management information**: Includes page tables or segment tables.
- **Accounting information**: Includes the amount of CPU time consumed, account numbers, limits, and so on.
- **I/O status information**: Includes I/O devices allocated, pending I/O requests, open file tables, and so on.

### Process Scheduling

On a computer system, there are often several processes that need to run together, and the requests for the resources needed for their execution are made asynchronously.
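The PCB fields listed above can be modeled as a simple data structure. A minimal Python sketch, with illustrative field names that do not come from any real kernel:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ProcessState(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

@dataclass
class PCB:
    pid: int                                        # unique process ID
    state: ProcessState = ProcessState.NEW          # process state
    program_counter: int = 0                        # address of next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU-scheduling information
    page_table: dict = field(default_factory=dict)  # memory-management information
    cpu_time_used: int = 0                          # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

# A new process starts in the NEW state, then is admitted to the ready queue.
pcb = PCB(pid=7)
pcb.state = ProcessState.READY
```

The kernel keeps one such record per process; on an interrupt it saves the register contents and program counter into the PCB so the process can later resume exactly where it left off.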
Thus, to handle competing requests for resources, including the processor, the OS uses a process scheduler. The process scheduler gives each process the resources it needs and its turn to execute on the CPU, as decided by an underlying scheduling algorithm.

- **Job queue**: The set of all processes in the system.
- **Ready queue**: The set of all processes loaded in primary storage that are ready and waiting for their turn to execute as soon as the CPU becomes available.
- **Device queue**: The set of processes waiting for an I/O device, such as a printer, to become free. This queue is also known as the blocked queue.

### Two-State Process Model

- **Running**: The process currently being executed by the CPU is said to be in the running state.
- **Not running**: The process waits for its turn to run or execute. If a process is interrupted, it is moved to the waiting queue. The dispatcher selects a process from the queue to execute; once the process completes or is halted, it is removed.

### Three Types of Process Schedulers

1. **Long-term or job scheduler**:
   - Selects processes from the job pool and places them in the ready state for execution.
   - Controls the degree of multiprogramming (i.e., the current number of processes in the ready queue) to avoid slow performance.
2. **Short-term or CPU scheduler**:
   - Decides which process in the ready state is executed next, scheduling it to the running state. The short-term scheduler does not itself load the process into the running state.
3. **Medium-term scheduler (swapping)**:
   - To ensure that main memory is best utilized, it swaps processes from main memory to secondary memory and vice versa.
   - It reduces the degree of multiprogramming.

## Context Switching

Suppose there are two running processes, P<sub>0</sub> and P<sub>1</sub>. Based on some scheduling algorithm, process P<sub>0</sub> is allocated to the CPU first.
After P<sub>0</sub> has finished execution, or after a specific time interval has elapsed, the CPU is allocated to the next process, P<sub>1</sub>. Again, P<sub>1</sub> runs until it finishes or its time slice elapses. Then the CPU is given back to P<sub>0</sub>, and so on. The cycle repeats while there are processes in memory waiting for the CPU.

When the CPU switches from one process to another, the old process's state must be saved and the state of the new process loaded. This is called **context switching**. When a context switch occurs, the kernel saves the old process's context in its PCB and loads the saved context of the new process scheduled to run. In the case of P<sub>0</sub>, its state must be saved before control is transferred to P<sub>1</sub>; if P<sub>0</sub> has not finished execution, it will be allocated the CPU again later and must be able to resume from its previous state.

### Threads

A thread is a flow of execution within a process; it is the basic unit of CPU utilization. Threading is widely used on the internet today, where transaction processing of every type is common: buying products, online transfers, and banking.

### Two Types of Threads

1. **User threads**:
   - Managed at the user level.
   - Usually fast and easy to implement.
   - Context switching is faster.
   - If one user-level thread performs a blocking operation, the entire process is blocked.
2. **Kernel threads**:
   - Managed and executed by the operating system.
   - Slower and more complicated to implement.
   - Context switching is slower.
   - If one kernel thread is blocked, the others are not affected.

### Single-threading

In single-threading, a running program has only one program counter specifying the next instruction to execute. In a single-threaded word processor, for example, the user cannot run the spelling and grammar checker within the same process while doing other work. A single thread performs only one task at a time.
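As a minimal sketch of multiple flows of execution inside one process, here is an example using Python's `threading` module (which maps threads onto kernel threads; the `task` function and variable names are my own):

```python
import threading

results = []
lock = threading.Lock()   # protects the shared list from concurrent appends

def task(name):
    # Each thread is a separate flow of execution within the same process.
    with lock:
        results.append(name)

# Create and start four threads; the OS interleaves their execution.
threads = [threading.Thread(target=task, args=(f"t{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # wait for every thread to finish

print(sorted(results))  # → ['t0', 't1', 't2', 't3']
```

All four threads share the process's heap and data (here, the `results` list), which is why a lock is needed; each has only its own stack and program counter.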
### Multithreading

In modern computers and mobile devices, most of the software applications that run are multithreaded. Multithreading allows multiple threads to be processed at one time. It improves the stability of programs and may prevent a program from crashing: since each thread is handled separately, an error in one thread should not affect the rest of the program. As an example of a multithreaded application, in an email client the tabs can be different threads: one thread composes mail, another prints a mail, another configures the Google account, and another handles a new task.

### Scheduling Algorithm

Scheduling is a central function of the operating system. Almost all system resources must be scheduled before use, including the CPU. The CPU selects from the ready queue the next process to execute.

### Classification of CPU Scheduling

- **Non-preemptive scheduling**: Once the CPU is assigned to a process and it starts executing, the CPU cannot be taken away from that process until it completes its execution. Other processes in the ready queue must wait their turn and cannot forcefully take the CPU.
- **Preemptive scheduling**: Even though the CPU is assigned to a process that is already executing, the CPU scheduler may take it away and assign it to another process in the ready queue.

A good scheduler should optimize the following performance criteria:

- **CPU utilization**: How busy the processor is, usually shown as a percentage.
- **Throughput**: The number of processes completed per unit time.
- **Turnaround time**: The interval from the time a process is submitted until it finishes; it is the sum of the time spent waiting in the ready queue, executing on the CPU, and doing I/O.
- **Response time**: The time from the submission of the process until it starts responding.
- **Waiting time**: The total amount of time a process spends in the ready queue waiting for its turn on the CPU.

## Different CPU Scheduling Algorithms

- **First-Come, First-Served (FCFS)**: The CPU executes first the process that arrives first in the ready queue. The main advantage is ease of implementation; the main disadvantage is that the average waiting time is generally long.
- **Shortest Job First (SJF)**: Selects, from the available processes in the ready queue, the waiting process with the lowest CPU burst time to execute next. SJF is a non-preemptive algorithm.
- **Shortest Remaining Time First (SRTF)**: The preemptive version of SJF. In SRTF, a running process can be stopped after a certain amount of time. Once all the processes are already in the ready queue, no preemption happens and SRTF works like SJF scheduling.
- **Round Robin (RR)**: A preemptive scheduling algorithm. Processes are selected on a first-come, first-served basis and run for a fixed length of time, called the time quantum (or time slice). When the time quantum expires, the running process is preempted and the CPU is assigned to the next arrived process for the same time period.
- **Priority scheduling**: This algorithm may be non-preemptive or preemptive. Each process is assigned a priority, and the CPU scheduler selects the process in the ready queue with the highest priority to execute next. In general, the lower the priority number, the higher the priority of the process.
- **Non-preemptive priority scheduling**: The process with the earliest arrival time is processed first. If two processes have the same arrival time, the process with the highest priority is executed first (again, a lower priority number means a higher priority).
  If two processes also have the same priority, they are executed on a first-come, first-served basis. Once a process is scheduled, it runs until completion.
- **Preemptive priority scheduling**: The CPU preempts the presently running process if a newly arrived process has a higher priority. Processes with the same priority are executed on a first-come, first-served basis. If all the jobs are already in the ready queue, the algorithm behaves as non-preemptive priority scheduling: no preemption is done, and the scheduled job runs until completion.

## **Self Assessment Questions**

1. How does the operating system manage processes?
2. Explain the terms PCB and context switching.
3. Describe the process life cycle.
4. What are the differences between user-level threads and kernel-level threads? Under what conditions is one type better than the other?
5. Describe the actions a kernel takes to context switch between processes.
6. Explain the difference between preemptive and non-preemptive scheduling. State why strict non-preemptive scheduling is unlikely to be used in a computer center.

# **Practice Exercise**

Review the following set of processes, with the length of the CPU burst time given in milliseconds:

| Process | Burst Time | Priority |
|---|---|---|
| P<sub>1</sub> | 10 | 3 |
| P<sub>2</sub> | 1 | 1 |
| P<sub>3</sub> | 2 | 3 |
| P<sub>4</sub> | 1 | 4 |
| P<sub>5</sub> | 5 | 2 |

Assume that the processes arrive in the order P<sub>1</sub>, P<sub>2</sub>, P<sub>3</sub>, P<sub>4</sub>, P<sub>5</sub>, all at time 0.

a. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, non-preemptive priority (a smaller priority number implies a higher priority), and RR (quantum = 1) scheduling.
b. Compute the turnaround time of each process for each of the scheduling algorithms in part a.
c. Compute the waiting time of each process for each of the scheduling algorithms in part a.
d.
Which of the schedules in part a results in the minimal average waiting time (over all processes)?

## **References**

- Elmasri, R., Carrick, A. G., & Levine, D. (2010). *Operating Systems: A Spiral Approach*. New York: McGraw-Hill.
- Hand, S. (2010). *Operating Systems*.
- Silberschatz, A., Galvin, P. B., & Gagne, G. (2018). *Operating System Concepts* (10th ed.). Hoboken, NJ: John Wiley & Sons.
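To make the FCFS and SJF policies discussed in this module concrete, here is a minimal Python sketch assuming all processes arrive at time 0. The burst times are a hypothetical example, deliberately not the practice-exercise data:

```python
# Minimal sketch of FCFS and non-preemptive SJF scheduling, assuming all
# processes arrive at time 0. Burst times below are hypothetical examples.

def schedule(bursts, order):
    """Return (waiting, turnaround) time dicts for processes run in `order`."""
    waiting, turnaround, clock = {}, {}, 0
    for pid in order:
        waiting[pid] = clock        # time spent waiting in the ready queue
        clock += bursts[pid]        # run to completion (non-preemptive)
        turnaround[pid] = clock     # from submission (t = 0) to completion
    return waiting, turnaround

bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}

# FCFS: execute in arrival order.
fcfs_wait, _ = schedule(bursts, ["P1", "P2", "P3", "P4"])

# SJF: execute in order of increasing CPU burst time.
sjf_order = sorted(bursts, key=bursts.get)
sjf_wait, _ = schedule(bursts, sjf_order)

print(sum(fcfs_wait.values()) / 4)  # FCFS average waiting time → 10.25
print(sum(sjf_wait.values()) / 4)   # SJF average waiting time → 7.0
```

Running the shorter jobs first lowers the average waiting time, which is exactly the advantage SJF claims over FCFS in the discussion above.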
