Operating Systems - Module 1 - Introduction
Document Details
JIS University
Summary
This document is a module on introductory topics about operating systems for students. It includes learning objectives, course outcomes, and course content. It is presented in a slide format.
Full Transcript
Course Title: Operating Systems Course Code: YCS5001 Total Contact Hours: 36
1 Course Code YCS5001 Course Title Operating Systems Category Professional Core LTP & Credits L T P Credits 3 0 0 3 Total Contact Hours 36 Pre-requisites a) Data Structures and Algorithms b) Computer Organization and Architecture
2 Learning Objective In this course, the students will learn about the role of the operating system as the interface between application programs and the computer hardware. The role of the operating system in managing various computer resources will be dealt with in detail. The course will be very helpful for the students in strengthening their skills in handling large software projects.
3 Course Outcome CO1: To explain the role of the operating system and how it acts as an interface between hardware and software. CO2: To contrast the concepts of processes and threads, and how they are scheduled. CO3: To demonstrate the use of various synchronization tools in solving the critical section problem. CO4: To explain and classify the various memory management techniques, including virtual memory. CO5: To apply the knowledge of data structures to explain how file systems can be implemented on secondary storage.
4 Course Content Module 1: Introduction to Operating Systems Functionalities of operating system - hardware/software interface. Evolution of operating systems - batch, multi-programmed, time-sharing, real-time, distributed. Simultaneous Peripheral Operations On-Line (SPOOL). Protection and Security - user/supervisory mode, privileged instructions, system calls (invoking OS services). Contact Hours: 4L
5 What is an Operating System?
6 What is an Operating System? A program that acts as an intermediary between a user of a computer and the computer hardware. An operating system is a collection of system programs that together control the operations of a computer system.
Some examples of operating systems are UNIX, Mach, MS-DOS, MS-Windows, Windows/NT, Chicago, OS/2, MacOS, VMS, MVS, and VM.
7 What is an Operating System? Fig : Block diagram of Operating System
8 Example of Operating Systems Fig : Example of some popular Operating Systems
9 Computer System Components Hardware – provides basic computing resources - CPU, memory, I/O devices. Operating system – controls and coordinates the use of the hardware among the various application programs for the various users. Application programs – define the ways in which the system resources are used to solve the computing problems of the users - compilers, database systems, video games, business programs. Users - people, machines, other computers.
10 Computer System Components Fig : Computer System Components
11 Computer System Components Operating systems can be explored from two viewpoints: the user and the system. User View: From the user's point of view, the OS is designed for one user to monopolize its resources, to maximize the work that the user is performing, and for ease of use. System View: From the computer's point of view, an operating system is a control program that manages the execution of user programs to prevent errors and improper use of the computer. It is concerned with the operation and control of I/O devices.
12 Computer System Components Fig : Computer System Components
13 Operating System Definitions Resource allocator – manages and allocates resources. Control program – controls the execution of user programs and operations of I/O devices. Kernel – the one program running at all times (all else being application programs).
14 Kernel and Shell Components of OS: An OS has two parts: (1) Kernel (2) Shell. (1) The kernel is the active part of an OS, i.e., the part of the OS running at all times. It is a program which can interact with the hardware. Ex: device drivers, dll files, system files, etc. (2) The shell is called the command interpreter.
It is a set of programs used to interact with the application programs. It is responsible for the execution of instructions given to the OS (called commands).
15 Kernel and Shell Fig : Kernel and Shell
16 Evolution of Operating System
17 Mainframe Systems Reduce setup time by batching similar jobs. Automatic job sequencing – automatically transfers control from one job to another. First rudimentary operating system. Resident monitor: initial control in monitor; control transfers to job; when the job completes, control transfers back to monitor.
18 Mainframe Systems Fig : Mainframe Systems
19 Batch Processing Operating System This type of OS accepts more than one job, and these jobs are batched/grouped together according to their similar requirements. This is done by the computer operator. Whenever the computer becomes available, the batched jobs are sent for execution and gradually the output is sent back to the user. It allowed only one program at a time. This OS is responsible for scheduling the jobs according to priority and the resources required.
20 Batch Processing Operating System Fig : Batch Processing Operating System
21 Multiprogramming Operating System This type of OS is used to execute more than one job concurrently on a single processor. It increases CPU utilization by organizing jobs so that the CPU always has one job to execute. The concept of multiprogramming is described as follows: All the jobs that enter the system are stored in the job pool (on disk). The operating system loads a set of jobs from the job pool into main memory and begins to execute them. During execution, a job may have to wait for some task, such as an I/O operation, to complete. In a multiprogramming system, the operating system simply switches to another job and executes it.
22 Multiprogramming Operating System (contd.)
When that job needs to wait, the CPU is switched to another job, and so on. When the first job finishes waiting, it gets the CPU back. As long as at least one job needs to execute, the CPU is never idle. Multiprogramming operating systems use the mechanisms of job scheduling and CPU scheduling.
23 Multiprogramming Operating System (contd.) Fig : Multiprogramming Operating System
24 Time-Sharing/multitasking Operating Systems Time-sharing (or multitasking) OS is a logical extension of multiprogramming. It provides extra facilities such as: faster switching between multiple jobs to make processing faster; allowing multiple users to share the computer system simultaneously; allowing users to interact with each job while it is running.
25 Time-Sharing/multitasking Operating Systems (contd.) These systems use the concept of virtual memory for effective utilization of memory space. Hence, in this OS, no jobs are discarded; each one is executed using the virtual memory concept. It uses CPU scheduling, memory management, disk management and security management. Examples: CTSS, MULTICS, CAL, UNIX, etc.
26 Time-Sharing/multitasking Operating Systems (contd.) Fig : Time-Sharing/multitasking Operating Systems
27 Multiprocessor Operating Systems Multiprocessor operating systems are also known as parallel OS or tightly coupled OS. Such operating systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices. They execute multiple jobs at the same time and make processing faster.
28 Multiprocessor Operating Systems (contd.) Fig : Multiprocessor Operating Systems
29 Multiprocessor Operating Systems (contd.) Multiprocessor systems have three main advantages: Increased throughput: By increasing the number of processors, the system performs more work in less time. The speed-up ratio with N processors is, however, less than N.
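The claim that the speed-up with N processors stays below N can be made concrete with Amdahl's law. The law itself is not named in the slides; it is a standard model assumed here: if a fraction s of the work is inherently serial, N processors can never deliver an N-fold speed-up.

```python
def speedup(n_processors, serial_fraction=0.1):
    """Amdahl's law: with a fraction s of the work that must run
    serially, N processors yield a speed-up of 1 / (s + (1 - s) / N),
    which is always below N and never exceeds 1 / s."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_processors)

# Even with only 10% serial work, the speed-up falls well short of N:
ratios = {n: speedup(n) for n in (2, 4, 8, 16)}
```

With s = 0.1, for example, 16 processors give a speed-up of only 6.4, and no processor count can push it past 10.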
Economy of scale: Multiprocessor systems can save more money than multiple single-processor systems, because they can share peripherals, mass storage, and power supplies.
30 Multiprocessor Operating Systems (contd.) Increased reliability: If one processor fails, each of the remaining processors picks up a share of the work of the failed processor. The failure of one processor will not halt the system, only slow it down. The ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. Systems designed for graceful degradation are called fault tolerant.
31 Multiprocessor Operating Systems (contd.) The multiprocessor operating systems are classified into two categories: 1. Symmetric multiprocessing system 2. Asymmetric multiprocessing system. In a symmetric multiprocessing system, each processor runs an identical copy of the operating system, and these copies communicate with one another as needed.
32 Multiprocessor Operating Systems (contd.) In an asymmetric multiprocessing system, one processor, called the master processor, controls the other processors, called slave processors, establishing a master-slave relationship. The master processor schedules the jobs and manages the memory for the entire system.
33 Multiprocessor Operating Systems (contd.) Fig : Symmetric Vs Asymmetric Multiprocessing Operating Systems
34 Distributed Operating Systems In a distributed system, the different machines are connected in a network, and each machine has its own processor and own local memory. In this system, the operating systems on all the machines work together to manage the collective network resources.
35 Distributed Operating Systems (contd.) Fig : Distributed Operating Systems
36 Distributed Operating Systems (contd.) It can be classified into two categories: 1.
Client-Server systems 2. Peer-to-Peer systems Fig : Client-Server Systems Fig : Peer-to-Peer Systems
37 Distributed Operating Systems (contd.) Advantages of distributed systems: – Resource sharing – Computation speed-up – Load sharing – Reliability – Communications. Distributed systems require a networking infrastructure: local area networks (LAN) or wide area networks (WAN).
38 Real-Time Operating System (RTOS) A real-time operating system (RTOS) is a multitasking operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems. Real-time operating systems can be classified into two categories: 1. Hard real-time systems and 2. Soft real-time systems.
39 Real-Time Operating System (RTOS) (contd.) A hard real-time system guarantees that critical tasks are completed on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time that it takes the operating system to finish any request made of it. Such time constraints dictate the facilities that are available in hard real-time systems. A soft real-time system is a less restrictive type of real-time system. Here, a critical real-time task gets priority over other tasks and retains that priority until it completes. Soft real-time systems can be mixed with other types of systems. Due to their weaker guarantees, they are risky to use for industrial control and robotics.
40 Real-Time Operating System (RTOS) (contd.) Fig : Real-Time Operating System
41 Simultaneous Peripheral Operations On-Line (SPOOL) Spooling is a process in which data is temporarily held to be used and executed by a device, program, or system.
Data is sent to and stored in memory or other volatile storage until the program or computer requests it for execution. SPOOL is an acronym for simultaneous peripheral operations on-line. Generally, the spool is maintained in the computer's physical memory, in buffers, or on I/O device-specific storage. The spool is processed in ascending order, working on a FIFO (first-in, first-out) basis.
42 Simultaneous Peripheral Operations On-Line (SPOOL) (contd.) Spooling refers to putting the data of various I/O jobs in a buffer. This buffer is a special area in memory or on the hard disk which is accessible to I/O devices. An operating system does the following activities related to spooling: Handles I/O device data spooling, as devices have different data access rates. Maintains the spooling buffer, which provides a waiting station where data can rest while the slower device catches up. Maintains parallel computation: because of the spooling process, a computer can perform I/O in parallel. It becomes possible to have the computer read data from a tape, write data to disk, and write out to a tape printer while it is doing its computing task.
43 How Spooling Works in Operating System In an operating system, spooling works in the following steps: Spooling involves creating a buffer, called a SPOOL, which is used to hold off jobs and data till the device in which the SPOOL is created is ready to make use of and execute that job or operate on that data. When a faster device sends data to a slower device to perform some operation, it uses any attached secondary memory as a SPOOL buffer. This data is kept in the SPOOL until the slower device is ready to operate on it. When the slower device is ready, the data in the SPOOL is loaded into main memory for the required operations.
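The steps above can be sketched as a toy print spool. This is a hypothetical illustration; a real spooler lives in the kernel or a daemon and keeps the spool on disk rather than in a Python list.

```python
from collections import deque

class PrintSpooler:
    """Toy print spool: fast producers enqueue documents and return
    immediately, while the slow printer drains the queue later in
    FIFO order, so processes never block waiting for the printer."""

    def __init__(self):
        self.spool = deque()        # the SPOOL buffer (in memory here)

    def submit(self, doc):
        self.spool.append(doc)      # returns at once: no waiting

    def printer_run(self):
        """The slow device catches up, printing jobs one by one."""
        printed = []
        while self.spool:
            printed.append(self.spool.popleft())
        return printed

spooler = PrintSpooler()
for doc in ["report.pdf", "photo.png", "notes.txt"]:
    spooler.submit(doc)             # the CPU is free to do other work now
printed = spooler.printer_run()     # FIFO: same order as submitted
```

The FIFO discipline is exactly the "queue of jobs that execute one by one" described above; priorities could be added by replacing the deque with a priority queue.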
44 How Spooling Works in Operating System Spooling considers the entire secondary memory as a huge buffer that can store many jobs and data for many operations. The advantage of spooling is that it creates a queue of jobs that execute in FIFO order, one by one. A computer can be connected to many input devices which may require some operation on their data, so all of these input devices may put their data onto the secondary memory (the SPOOL), which can then be processed one by one. This makes sure that the CPU is not idle at any time. So, we can say that spooling is a combination of buffering and queuing.
45 How Spooling Works in Operating System After the CPU generates some output, this output is first saved in main memory. The output is then transferred from main memory to secondary memory, and from there it is sent to the respective output devices. Fig : Simultaneous Peripheral Operations On-Line (SPOOL)
46 Example of Spooling The biggest example of spooling is printing. The documents which are to be printed are stored in the SPOOL and then added to the queue for printing. During this time, many processes can perform their operations and use the CPU without waiting while the printer executes the printing process on the documents one by one. Many features can also be added to the spooling printing process, like setting priorities, notification when the printing process has completed, or selecting different types of paper to print on according to the user's choice.
47 Example of Spooling (contd.)
Fig : Example of Spooling
48 Difference between Spooling and Buffering Fig : Difference between Spooling and Buffering
49 Protection and Security
50 Dual Mode Operations in Operating System The dual-mode operations in the operating system protect the operating system from illegal users. We accomplish this defense by designating the system instructions that can cause harm as privileged instructions. The hardware allows privileged instructions to be executed only in kernel mode. An example of a privileged instruction is the instruction to switch to user mode. Other examples include I/O control, management of timers and handling of interrupts.
51 Dual Mode Operations in Operating System (contd.) To ensure proper operating system execution, we must differentiate between the execution of operating-system code and user-defined code. Most computer systems provide hardware support that helps distinguish between different execution modes. We have two modes of operation: user mode and kernel mode. A mode bit is required to identify in which particular mode the current instruction is executing. If the mode bit is 1, the machine operates in user mode, and if the mode bit is 0, it operates in kernel mode. Types of Dual Mode in Operating System The operating system has two modes of operation to ensure it works correctly: user mode and kernel mode.
52 Supervisor Mode (Privileged Mode) Supervisor mode, or privileged mode, is a computer system mode in which all instructions, including privileged instructions, can be performed by the processor. Some of these privileged instructions are interrupt instructions, input/output management, etc. The kernel is the most privileged part of the computer system. There are some privileged instructions that can only be executed in kernel mode or supervisor mode. The privilege reduces for device drivers and applications respectively. Fig : Supervisor Mode (Privileged Mode)
53 Supervisor Mode (Privileged Mode) (contd.)
Fig : Supervisor Mode (Privileged Mode)
54 User Mode When the computer system runs user applications like file creation or any other application program, it is in User Mode. This mode does not have direct access to the computer's hardware. For performing hardware-related tasks, as when the user application requests a service from the operating system or some interrupt occurs, the system must switch to Kernel Mode. The mode bit of the user mode is 1. This means that if the mode bit of the system's processor is 1, then the system will be in User Mode.
55 User Mode (contd.) Fig : User Mode
56 Kernel Mode All the bottom-level tasks of the operating system are performed in Kernel Mode. As kernel space has direct access to the hardware of the system, kernel mode handles all the processes which require hardware support. Apart from this, the main functionality of Kernel Mode is to execute privileged instructions. These privileged instructions are not provided with user access, and that is why they cannot be processed in User Mode. So, all the processes and instructions that the user is restricted from interfering with are executed in the Kernel Mode of the operating system. The mode bit for Kernel Mode is 0. So, for the system to function in Kernel Mode, the mode bit of the processor must be equal to 0.
57 Kernel Mode (contd.)
Fig : Kernel Mode
58 Example of Kernel & User Mode Fig : Example of Kernel & User Mode
59 Example of Kernel & User Mode (contd.) When the computer system executes on behalf of a user application, the system is in user mode. However, when a user application requests a service from the operating system via a system call, it must transition from user to kernel mode to fulfill the request. This architectural enhancement is useful for many other aspects of system operation as well. At system boot time, the hardware starts in kernel mode. The operating system is then loaded and starts user applications in user mode. Whenever a trap or interrupt occurs, the hardware switches from user mode to kernel mode, changing the state of the mode bit to 0. Thus, whenever the operating system gains control of the computer, it is in kernel mode. The system always switches to user mode by setting the mode bit to 1 before passing control to a user program.
60 User Mode and Kernel Mode Switching In its life span, a process executes in user mode and kernel mode. User mode is the normal mode, where the process has limited access. Kernel mode is the privileged mode, where the process has unrestricted access to system resources like hardware, memory, etc. A process can access services like hardware I/O by executing kernel code in kernel mode. Anything related to process management, I/O hardware management, and memory management requires a process to execute in kernel mode.
61 User Mode and Kernel Mode Switching (contd.) It is important to know that a process in kernel mode gets the power to access any device and memory, and at the same time any crash in kernel mode brings down the whole system, whereas a crash in user mode brings down only the faulty process. The kernel provides the System Call Interface (SCI), whose entry points allow user processes to enter kernel mode. System calls are the only way through which a process can go into kernel mode from user mode.
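The mode-bit discipline just described can be sketched as a toy CPU model. This is purely illustrative; in real hardware the mode bit lives in a processor status register and the trap is taken by the hardware, not by Python code. The convention matches the slides: mode bit 1 = user, 0 = kernel.

```python
class ToyCPU:
    """Toy model of dual-mode operation: privileged instructions trap
    unless the CPU is in kernel mode; a system call switches the mode,
    runs kernel code on the caller's behalf, and restores user mode."""

    PRIVILEGED = {"set_timer", "io_control", "halt"}

    def __init__(self):
        self.mode = 1           # 1 = user mode, 0 = kernel mode
        self.log = []           # (instruction, mode it executed in)

    def execute(self, instr):
        if instr in self.PRIVILEGED and self.mode == 1:
            # Hardware would trap to the operating system here.
            raise PermissionError("trap: privileged instruction in user mode")
        self.log.append((instr, self.mode))

    def syscall(self, instr):
        self.mode = 0           # trap switches to kernel mode
        try:
            self.execute(instr) # privileged work done by the kernel
        finally:
            self.mode = 1       # back to user mode before returning

cpu = ToyCPU()
cpu.execute("add")              # non-privileged: fine in user mode
try:
    cpu.execute("set_timer")    # privileged: treated as illegal, trapped
    trapped = False
except PermissionError:
    trapped = True
cpu.syscall("set_timer")        # the only legal route into kernel mode
```

Note how the attempted privileged instruction never reaches the log, while the same instruction issued through the system call runs with mode bit 0 and control returns in user mode.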
The diagram below explains user mode to kernel mode switching in detail.
62 User Mode and Kernel Mode Switching (contd.) Fig : User Mode and Kernel Mode Switching
63 User Mode and Kernel Mode Switching (contd.) When in user mode, the application process makes a call into Glibc, the C library used by software programmers. The Glibc library knows the proper way of making a system call on different architectures. It sets up argument passing as per the architecture's Application Binary Interface (ABI) to prepare for system call entry. Glibc then executes the software interrupt instruction (SWI on ARM), which puts the processor into supervisor mode by updating the mode bits of the CPSR register and jumps to vector address 0x08. Until this point, process execution was in user mode. After the SWI instruction executes, the process is allowed to execute kernel code.
64 User Mode and Kernel Mode Switching (contd.) The Memory Management Unit (MMU) will now allow kernel virtual memory access and execution for this process. From vector address 0x08, execution jumps to the software interrupt handler routine, vector_swi() on ARM. In vector_swi(), the System Call Number (SCNO) is extracted from the SWI instruction, and execution jumps to the system call function, using SCNO as an index into the system call table sys_call_table. After the system call executes, on the return path, the userspace registers are restored before execution resumes in user mode.
65 User Mode and Kernel Mode Switching (contd.) There are two main reasons behind the switching between user mode and kernel mode: 1. If everything were to run in a single mode, we would end up with the problem of early versions of Microsoft Windows: if a process were able to exploit a vulnerability, that process could then control the system. 2. Certain conditions, known as a trap, an exception, or a system fault, are typically caused by an exceptional condition such as division by zero or invalid memory access.
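The Glibc path described above can be observed from Python on a UNIX-like system. This is a sketch assuming Linux or a similar platform whose C library exposes getpid; both routes below end in the same kernel-mode system call.

```python
import ctypes
import ctypes.util
import os

# Locate and load the C library (the "Glibc" of the text on Linux).
# Passing None to CDLL falls back to the symbols of the running
# process, which include the C library on most UNIX-like systems.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# libc's getpid() is a thin wrapper: it loads the system call number
# into the register the ABI designates, executes the architecture's
# trap instruction (SWI/SVC on ARM, syscall on x86-64), and the kernel
# runs the handler on our behalf before returning to user mode.
pid_via_libc = libc.getpid()

# Python's os.getpid() ultimately takes the same path through the C
# library, so both user-mode entry points report the same answer.
pid_via_os = os.getpid()
```

That the two values agree is a small demonstration that the library call is only a doorway: the identity of the process is kernel-side state, reachable only through the system call interface.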
If the process is running in kernel mode, such a trap situation can crash the entire operating system. A process in user mode that encounters a trap situation only crashes the user-mode process. So, the overhead of switching is acceptable to ensure a more stable, secure system.
66 Difference between User Mode and Kernel Mode A computer operates either in user mode or kernel mode. The difference between user mode and kernel mode is that user mode is the restricted mode in which applications run, while kernel mode is the privileged mode the computer enters when accessing hardware resources. The computer switches between these two modes. Frequent context switching can slow the system down, but it is not possible to execute all processes in kernel mode, because if one process failed, the whole operating system might fail. Below are some more differences between user mode and kernel mode:
67 Difference between User Mode and Kernel Mode (contd.)
Definition: User Mode is a restricted mode in which application programs execute and start. Kernel Mode is the privileged mode which the computer enters when accessing hardware resources.
Modes: User Mode is considered the slave mode or the restricted mode. Kernel Mode is the system mode, master mode or the privileged mode.
Address Space: In User Mode, a process gets its own address space. In Kernel Mode, processes share a single address space.
Interruptions: In User Mode, if an interrupt occurs, only one process fails. In Kernel Mode, if an interrupt occurs, the whole operating system might fail.
Restrictions: In User Mode, programs cannot access kernel programs directly. In Kernel Mode, both user programs and kernel programs can be accessed.
68 Operating System Services Following are the five services provided by operating systems for the convenience of users. 1.
Program Execution The purpose of computer systems is to allow the user to execute programs. So the operating system provides an environment where the user can conveniently run programs. Running a program involves allocating and deallocating memory, and CPU scheduling in the case of multiprogramming. 2. I/O Operations Each program requires input and produces output. This involves the use of I/O. By providing I/O services, the operating system makes it convenient for users to run programs.
69 Operating System Services (contd.) 3. File System Manipulation The output of a program may need to be written into new files, or input taken from some files. The operating system provides this service. 4. Communications Processes need to communicate with each other to exchange information during execution. This may be between processes running on the same computer or on different computers. Communication can occur in two ways: (i) shared memory or (ii) message passing. 5. Error Detection An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation, the operating system constantly monitors the system to detect errors. This relieves the user of the worry of errors propagating to various parts of the system and causing malfunctions.
70 Operating System Services Following are the three services provided by operating systems for ensuring the efficient operation of the system itself. 1. Resource Allocation When multiple users are logged on to the system or multiple jobs are running at the same time, resources must be allocated to each of them. Many different types of resources are managed by the operating system. 2. Accounting The operating system keeps track of which users use how many and which kinds of computer resources. This record keeping may be used for accounting (so that users can be billed) or simply for accumulating usage statistics.
71 Operating System Services (contd.) 3.
Protection When several separate processes execute concurrently, it should not be possible for one process to interfere with the others, or with the operating system itself. Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders is also important. Such security starts with each user having to authenticate themselves to the system, usually by means of a password, to be allowed access to the resources.
72 Instructions An instruction is an order or a command given to the system by an application. Instructions in the operating system are divided into two categories: privileged instructions and non-privileged instructions. The software can be divided into two categories based on the instructions it may execute: Kernel - can execute both privileged and non-privileged instructions. Application - can execute only non-privileged instructions. Fig : Instructions
73 Privileged Instructions Privileged instructions are instructions that can be executed only in kernel mode. If a privileged instruction is attempted in user mode, it is not executed but treated as an illegal instruction; the hardware traps it to the operating system. It is the responsibility of the operating system to ensure that the timer is set to interrupt before transferring control to any user application, so that the operating system can regain control when the timer interrupt occurs. The operating system uses privileged instructions to ensure proper operation. Examples of privileged instructions: I/O instructions, context switching, clearing memory, setting the timer of the CPU, halt instructions, interrupt management, modifying entries in the device-status table.
74 Non-Privileged Instructions Non-privileged instructions are instructions that can be executed in user mode.
Examples of non-privileged instructions: generating a trap instruction, reading the system time, reading the status of the processor, sending output to the printer, performing arithmetic operations. The instruction set is thus divided into two categories: Privileged instructions - these must be used wisely, as their misuse can harm the system. Non-privileged instructions - these are normal instructions.
75 Difference between Privileged and Non-Privileged Instructions
1. Privileged instructions are executed only in kernel mode. Non-privileged instructions are executed in user mode.
2. Privileged instructions are executed under specific restrictions and are mostly used for sensitive operations. Non-privileged instructions execute without interfering with other tasks because they do not share any resources.
3. Examples of privileged instructions: I/O instructions, context switching, clearing memory, etc. Examples of non-privileged instructions: generating a trap instruction, reading the system time, reading the status of the processor, etc.
76 System Calls System calls provide an interface between a process and the operating system. System calls allow user-level processes to request services from the operating system which the process itself is not allowed to perform. For example, for I/O a process makes a system call telling the operating system to read or write a particular area, and this request is satisfied by the operating system.
77 System Calls (contd.)
Process control: end, abort; load, execute; create process, terminate process; get process attributes, set process attributes; wait for time; wait event, signal event; allocate and free memory.
File management: create file, delete file; open, close; read, write, reposition; get file attributes, set file attributes.
Device management: request device, release device; read, write, reposition; get device attributes, set device attributes; logically attach or detach devices.
78 System Calls (contd.)
Information maintenance: get time or date, set time or date; get system data, set system data; get process, file, or device attributes; set process, file, or device attributes.
Communications: create, delete communication connection; send, receive messages; transfer status information; attach or detach remote devices.
79 Layered Structure of Operating System The operating system can be implemented with the help of various structures. The structure of the OS depends mainly on how the various common components of the operating system are interconnected and melded into the kernel. The layered structure approach breaks up the operating system into different layers and retains much more control over the system. The bottom layer (layer 0) is the hardware, and the topmost layer (layer N) is the user interface. These layers are designed so that each layer uses the functions of the lower-level layers only. This simplifies the debugging process: if an error occurs while debugging a layer whose lower-level layers have already been debugged, the error must be in that layer, since the layers below it have already been verified.
80 Layered Structure of Operating System (contd.) Fig : Layered Structure of Operating System
81 Layered Structure of Operating System (contd.) This allows implementers to change the inner workings of routines and increases modularity. As long as the external interface of the routines doesn't change, developers have more freedom to change their inner workings. The main advantage is the simplicity of construction and debugging. The main difficulty is defining the various layers. The main disadvantage of this structure is that the data needs to be modified and passed on at each layer, which adds overhead to the system. Moreover, careful planning of the layers is necessary, as a layer can use only lower-level layers. UNIX is an example of this structure.
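Looking back at the system-call categories tabulated in slides 77 and 78, several of them map directly onto Python's os module on a UNIX-like system. This is a sketch assuming such a platform; each os call below is a thin wrapper over the corresponding kernel system call.

```python
import os
import tempfile

# File management: create file / open, write, reposition, read,
# close, delete file.
path = os.path.join(tempfile.mkdtemp(), "syscall_demo.txt")
fd = os.open(path, os.O_CREAT | os.O_RDWR)   # create file, open
os.write(fd, b"hello, kernel")               # write
os.lseek(fd, 0, os.SEEK_SET)                 # reposition
data = os.read(fd, 5)                        # read
os.close(fd)                                 # close
os.unlink(path)                              # delete file

# Information maintenance: get process attributes, get system data.
pid = os.getpid()                            # get process attributes
cpu_times = os.times()                       # get system data
```

Process-control calls such as create process / wait (os.fork, os.waitpid on UNIX) belong to the same interface but are omitted here to keep the sketch short.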
82

Architecture of Layered Structure
This type of operating system was created as an improvement over the early monolithic systems. The operating system is split into various layers, and each layer has different functionality. The implementation of the layers follows these rules:
- A particular layer can access all the layers below it, but it cannot access the layers above it. That is, layer n-1 can access all the layers from n-2 down to 0, but it cannot access layer n.
- Layer 0 deals with allocating processes and with switching between processes when an interrupt occurs or the timer expires; it also handles the basic multiprogramming of the CPU.
- Thus, if the user layer wants to interact with the hardware layer, the request travels down through all the layers from n-1 to 1.
- Each layer must be designed and implemented so that it needs only the services provided by the layers below it.
83

Architecture of Layered Structure (contd.)
Fig: Architecture of Layered Structure
84

Architecture of Layered Structure (contd.)
1. Hardware: This layer interacts with the system hardware and coordinates all the peripheral devices used, such as a printer, mouse, keyboard, scanner, etc. These hardware devices are managed in the hardware layer. The hardware layer is the lowest and most authoritative layer in the layered architecture; it is attached directly to the core of the system.
2. CPU Scheduling: This layer deals with scheduling processes for the CPU. Many scheduling queues are used to handle processes: when processes enter the system, they are put into the job queue, and the processes that are ready to execute in main memory are kept in the ready queue. This layer is responsible for deciding how many processes will be allocated to the CPU and how many will stay out of it.
85

Architecture of Layered Structure (contd.)
3.
Memory Management: This layer deals with memory and with moving processes from disk to primary memory for execution and back again. All memory management is associated with this layer. The computer has various types of memory, such as RAM and ROM; RAM is the memory concerned with swapping processes in and out. When the computer runs, processes move into main memory (RAM) for execution, and when a program such as a calculator exits, it is removed from main memory.
4. Process Management: This layer is responsible for managing processes, i.e., assigning the processor to a process and deciding how many processes will stay in the waiting queue. The priority of processes is also managed in this layer. Algorithms used for process scheduling include FCFS (first come, first served), SJF (shortest job first), priority scheduling, round-robin scheduling, etc.
86

Architecture of Layered Structure (contd.)
5. I/O Buffer: I/O devices are very important in computer systems; they provide users with the means of interacting with the system. This layer handles the buffers for the I/O devices and makes sure that they work correctly. For example, when you type, the keyboard buffer attached to the keyboard stores the data temporarily; similarly, every input/output device has a buffer attached to it. This is because input/output devices process and store data slowly, so the computer uses buffers to bridge the speed gap between the processor and the I/O devices.
6. User Programs: This is the highest layer in the layered operating system. It deals with the many user programs and applications that run on the operating system, such as word processors, games, browsers, etc. It can also be called the application layer because it is concerned with application programs.
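The rule that a request from the user-program layer must travel down through every intermediate layer can be sketched in code. The following is an illustrative toy model only, with layer names taken from the six layers above; it is not a real kernel, and the class and method names are invented for the sketch.

```python
# Toy model of a layered OS: each layer holds a reference only to the
# layer directly below it, so a request from the top layer must pass
# through every intermediate layer before reaching the hardware.
class Layer:
    def __init__(self, name, below=None):
        self.name = name
        self.below = below          # only the layer below is reachable

    def request(self, op, trace=None):
        trace = [] if trace is None else trace
        trace.append(self.name)     # record the layer the request visits
        if self.below is None:      # bottom layer: hardware services it
            return trace
        return self.below.request(op, trace)

# Build the stack from the bottom up, following the figure.
hardware   = Layer("hardware")
cpu_sched  = Layer("cpu-scheduling", below=hardware)
memory     = Layer("memory-management", below=cpu_sched)
process    = Layer("process-management", below=memory)
io_buffer  = Layer("io-buffer", below=process)
user_progs = Layer("user-programs", below=io_buffer)

# A user-program request visits all six layers on its way down.
path = user_progs.request("read")
print(path)
```

Note how a layer cannot skip levels: this models both the modularity advantage and the "slower execution" disadvantage discussed below, since every request pays the cost of traversing the intermediate layers.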
87

Advantages of Layered Structure
The layered design of an operating system has several advantages:
1. Modularity: The design promotes modularity, as each layer performs only the tasks it is assigned to perform.
2. Easy debugging: Because the layers are discrete, debugging is easy. If an error occurs in the CPU-scheduling layer, the developer need search only that layer, unlike in a monolithic system, where all the services are present together.
3. Easy update: A modification made in a particular layer does not affect the other layers.
4. No direct access to hardware: The hardware layer is the innermost layer in the design, so a user can use the services of the hardware but cannot directly access or modify it, unlike in a simple-structured system, where the user has direct access to the hardware.
5. Abstraction: Each layer is concerned only with its own functions, so the functions and implementations of the other layers are abstracted from it.
88

Disadvantages of Layered Structure
Though this design has several advantages over the monolithic and simple designs, it also has some disadvantages:
1. Complex and careful implementation: Since a layer can access the services of the layers below it, the arrangement of the layers must be done carefully. For example, the backing-storage layer uses the services of the memory-management layer, so it must be placed below the memory-management layer. Thus, with great modularity comes complex implementation.
2. Slower execution: If a layer wants to interact with another layer, the request must travel through all the layers between the two interacting layers. This increases response time, unlike in a monolithic system, which is faster. Thus an increase in the number of layers may lead to a very inefficient design.
3. Functionality: It is not always possible to divide the functionalities; many times they are interrelated and cannot be separated.
4.
Communication: There is no communication between non-adjacent layers.
89

Thank you
90