CONCEPTS OF OPERATING SYSTEM

Basic Concept of OS
An OPERATING SYSTEM is a program that acts as an intermediary between a user of a computer and the computer hardware. It acts as a resource allocator and program controller.

Resource Allocator
The OS manages and allocates various resources, including:
- Memory: Allocates memory space for programs and data.
- CPU: Schedules processes to run on the CPU.
- I/O Devices: Controls access to devices like disks, printers, and network interfaces.
Efficient resource allocation ensures fair sharing among multiple users and applications.

Program Control
The OS controls the execution of programs to prevent errors and improper use of the computer. It ensures that processes run in a safe and orderly manner.
Examples: process creation, termination, and synchronization.

Operating System as a Platform
- Users: people, machines, other computers.
- Application programs: define the ways in which the system resources are used to solve the computing problems of the users. Examples: word processors, compilers, web browsers, database systems, video games.
- Operating system: controls and coordinates the use of the hardware among the various applications and users.
- Hardware: provides the basic computing resources, such as CPU, memory, I/O devices, storage, and network resources.
"OS Goal is CONVENIENCE & EFFICIENCY"

Computer System Operations
A computer system consists of several components that work together to perform tasks efficiently. The main components involved in I/O operations are:
- CPU (Central Processing Unit): Executes instructions and performs calculations.
- Main Memory: Stores data and program instructions.
- Device Controllers: Manage specific devices, such as hard drives, keyboards, and displays.
- Local Buffers: Small memory areas within device controllers that temporarily store data.
- System Bus: A shared communication pathway that connects the CPU, main memory, and device controllers.
(Without an operating system, working with this arrangement would be slow and difficult: the CPU would take a task from the user, execute it to collect data from the controllers or adapters, bring the data back to the CPU, and then move it to memory.)

Local Buffer
A buffer is a region of memory used to hold data temporarily while it is being moved from one place to another. In the context of device controllers, a local buffer is used to:
- Store data being transferred from the device to main memory
- Store data being transferred from main memory to the device
- Act as a staging area for data being processed by the device controller

System Bus
The system bus is a shared communication pathway that connects the CPU, main memory, and device controllers. It allows data to be transferred between these components. The system bus consists of:
- Address bus: Carries memory addresses and control signals
- Data bus: Carries data being transferred between components
- Control bus: Carries control signals, such as interrupts and clock signals

I/O (Input and Output) Operations
- Device Driver Initialization: When an application or the operating system wants to perform an I/O operation, it interacts with the appropriate device driver. The device driver is a software component that acts as an intermediary between the application/OS and the hardware device.
- Loading Appropriate Registers: To start an I/O operation, the device driver first loads specific registers within the device controller associated with the target device. These registers hold configuration information, control flags, memory addresses, and other relevant details.
- Device Controller Examination: The device controller (also known as the I/O controller or adaptor) examines the contents of these registers. Based on the information provided by the device driver, the controller determines what action it needs to take.
- Data Transfer: With the device prepared, the device controller starts the transfer of data from the device to its local buffer. The local buffer is a temporary storage area within the device controller that holds the data until it can be transferred to the operating system.
- Interrupt Notification: Once the data transfer is complete, the device controller informs the device driver via an interrupt that the operation is finished. The interrupt signals the device driver that the I/O operation is complete and that the data is ready to be transferred to the operating system.
- Control Returned to Operating System: The device driver, upon receiving the interrupt, retrieves the data from the local buffer and transfers it to the operating system. The operating system can then access the data and perform any necessary processing or storage.
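From a user program's point of view, this whole sequence is hidden behind a single blocking system call. The short C program below is a minimal sketch, assuming a POSIX system; /etc/hostname is just an arbitrary readable file, and in practice the data may already be in the kernel's cache so no device transfer is needed. One read() call is enough for the kernel to select the driver, program the controller, wait for the interrupt, and copy the buffered data back to the process.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[128];

    /* Open a file; the kernel selects the appropriate driver. */
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* One blocking read(): the driver loads the controller registers,
     * the controller fills its local buffer and raises an interrupt,
     * and the kernel copies the data into buf before this call returns. */
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n < 0) {
        perror("read");
        close(fd);
        return 1;
    }
    buf[n] = '\0';
    printf("read %zd bytes: %s", n, buf);

    close(fd);
    return 0;
}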
BOOTSTRAP PROGRAM
- The initial program that runs when a computer is powered up or rebooted.
- Typically stored in ROM or EPROM, generally known as firmware.
- Loads the operating system kernel and starts its execution.

INTERRUPTS
The occurrence of an event is usually signaled by an interrupt from hardware or software. An interrupt is a signal emitted by either hardware or software when an event or process requires immediate attention from the processor.
- Hardware may trigger an interrupt at any time by sending a signal to the CPU.
- Software may trigger an interrupt by executing a special operation called a system call.

Types of I/O Operations
- Synchronous I/O: the operating system waits for the I/O operation to complete before continuing execution. This can lead to slower system performance, as the operating system is blocked while waiting for the I/O operation to complete.
- Asynchronous I/O: the operating system continues executing other tasks while the I/O operation is in progress.
- Direct Memory Access (DMA): a technique used to transfer data between devices and memory without involving the CPU. This can improve system performance, as the CPU is not involved in the data transfer process.
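The contrast between synchronous and asynchronous I/O can be sketched with standard POSIX calls: read() blocks until the data is available, while aio_read() only queues the request and returns immediately. This is a hedged sketch assuming a POSIX system with the AIO library (on Linux, compile with -lrt); the file name is arbitrary.

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[256];
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Synchronous I/O: the process blocks until the data arrives. */
    ssize_t n = read(fd, buf, sizeof buf);
    printf("synchronous read returned %zd bytes\n", n);

    /* Asynchronous I/O: queue the request and continue executing. */
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;

    if (aio_read(&cb) != 0) { perror("aio_read"); close(fd); return 1; }

    while (aio_error(&cb) == EINPROGRESS) {
        /* Other work could run here while the transfer is in progress. */
    }
    printf("asynchronous read returned %zd bytes\n", aio_return(&cb));

    close(fd);
    return 0;
}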
FUNCTIONS OF AN OS

Process Management
- Process Creation and Termination: The OS creates and manages processes, which are programs in execution. It allocates resources, such as memory and CPU time, to each process.
- Process Scheduling: The OS schedules processes to optimize system performance, ensuring efficient use of resources.
- Process Synchronization: The OS coordinates the activities of multiple processes to prevent conflicts and ensure data consistency.

Memory Management
- Memory Allocation: The OS manages memory allocation and deallocation for running programs.
- Memory Protection: The OS ensures that each process runs in its own memory space, preventing unauthorized access to other processes' memory.

File Management
- File Creation and Deletion: The OS provides a file system, allowing users to create, delete, and manage files.
- File Access Control: The OS regulates access to files, ensuring that only authorized users can read, write, or execute files.

Input/Output (I/O) Management
- I/O Device Management: The OS manages input/output devices, such as keyboards, displays, and printers.
- I/O Operations: The OS performs I/O operations, such as reading and writing data to devices.

Security and Protection
- User Authentication: The OS verifies user identities, ensuring that only authorized users can access system resources.
- Access Control: The OS enforces access control mechanisms, such as permissions and access lists, to protect system resources.

KINDS OF OPERATING SYSTEM

1. Single-User, Single-Tasking OS
These OS are designed for a single user and can only execute one program at a time. This means that the user has to close one program before opening another.
Examples of single-user, single-tasking OS include:
a. MS-DOS: Developed by Microsoft, MS-DOS was one of the earliest popular OS for personal computers. It was a command-line based OS that required users to type commands to perform tasks.
b. Early versions of Windows: Windows 1.0 and 2.0 were single-user, single-tasking OS designed for personal computers.
The advantages of single-user, single-tasking OS include:
- Simple design and implementation
- Low system requirements
- Easy to use and manage
However, these OS have several limitations, including:
- Limited multitasking capabilities
- Limited user interaction
- Limited hardware support

2. Single-User, Multi-Tasking OS
These OS allow a single user to access the computer but can execute multiple programs simultaneously. This means that the user can have multiple windows open at the same time, improving productivity and efficiency.
Examples of single-user, multi-tasking OS include:
- Modern versions of Windows: Windows 95, 98, and XP are examples of single-user, multi-tasking OS that support multitasking and have a graphical user interface.
- macOS: Developed by Apple, macOS is a single-user, multi-tasking OS designed for personal computers.
- Linux: Linux is an open-source OS that can be configured as a single-user, multi-tasking OS.
The advantages of single-user, multi-tasking OS include:
- Improved multitasking capabilities
- Improved user interaction
- Support for multiple hardware devices
However, these OS have several limitations, including:
- Increased system requirements
- Increased complexity
- Steeper learning curve
3. Multi-User OS
These OS allow multiple users to access the computer simultaneously, with each user having their own account and privileges.
Examples of multi-user OS include:
- Unix: Developed in the 1970s, Unix is a multi-user OS designed for mainframe computers and servers.
- Linux: Linux is an open-source OS that can be configured as a multi-user OS.
- Windows Server: Developed by Microsoft, Windows Server is a multi-user OS designed for servers and networks.
Components of a Multi-User Operating System:
- Memory: Main memory (RAM) determines how many programs can run simultaneously. Data is kept in main memory, and programs are copied into it from physical storage.
- Physical Storage: Holds a large amount of data and determines how many programs can run at once.
- Kernel: Embedded in main memory, interacts with the hardware, and is written in a low-level language.
- Processor: The Central Processing Unit (CPU), the core of the computer.
- Device Handler: Manages device requests, operates in a continuous cycle, and follows the First-In-First-Out (FIFO) principle.
- Spooler: Manages simultaneous peripheral output, runs computer processes, and outputs results.
- User Interface: Creates a simple environment for users to interact with the computer system and provides communication between users and hardware/software.
Types of Multi-User Operating System: Multi-user operating systems are of the following types:
- Distributed system
- Time-sliced system
- Multiprocessor system

4. Multi-Tasking OS
These OS can execute multiple programs simultaneously, improving system efficiency and productivity.
Examples of multi-tasking OS include:
- Modern versions of Windows: Windows 10, 8, and 7 are examples of multi-tasking OS that support multitasking and have a graphical user interface.
- macOS: Developed by Apple, macOS is a multi-tasking OS designed for personal computers.
- Linux: Linux is an open-source OS that can be configured as a multi-tasking OS.
The advantages of multi-tasking OS include:
- Improved multitasking capabilities
- Improved system efficiency
- Support for multiple hardware devices
However, these OS have several limitations, including:
- Increased system requirements
- Increased complexity
- Overloading the main memory (RAM) can lead to memory constraints.

5. Real-Time OS
These OS are designed to process data in real time, with predictable and fast responses.
Examples of real-time OS include:
- VxWorks: Developed by Wind River Systems, VxWorks is a real-time OS designed for embedded systems and industrial control systems.
- QNX: Developed by BlackBerry, QNX is a real-time OS designed for automotive and industrial control systems.
The advantages of real-time OS include:
- Predictable and fast responses
- Improved system reliability (error-free operation)
- Support for critical systems
- Maximum use of devices and systems, resulting in more output from resources
However, these OS have several limitations, including:
- Few tasks can run simultaneously, with a focus on a few applications to avoid errors.
- These systems can be expensive and require heavy system resources.
- Algorithms are complex and difficult to design.
- They require specific device drivers and interrupt signals to respond quickly to interrupts.
- Setting thread priority can be challenging, and task switching is minimal.
6. Mobile OS
These OS are designed for mobile devices, such as smartphones and tablets.
Examples of mobile OS include:
- Android: Developed by Google, Android is a mobile OS designed for smartphones and tablets.
- iOS: Developed by Apple, iOS is a mobile OS designed for iPhones and iPads.
- Windows Phone: Developed by Microsoft, Windows Phone is a mobile OS designed for smartphones.

7. Distributed OS
A distributed OS manages a group of computers connected via a network, providing a single, integrated system. Examples of distributed OS include clusters, grids, and cloud computing systems.
Its characteristics and advantages include:
- Manages a group of computers connected via a network
- Provides a single, integrated system
- Improved system scalability
- Improved system reliability
- Support for large-scale systems

EVOLUTION OF OPERATING SYSTEM

The Early Years (1950s-1960s)
The first computers, such as UNIVAC I and IBM 701, used machine language and were operated manually. The concept of an operating system as we understand it today did not exist. With the advent of batch processing, the need for a system to manage jobs (programs) and resources arose.
- CTSS (Compatible Time-Sharing System): Developed in 1961 at MIT, CTSS was the first operating system to support time-sharing, allowing multiple users to access the computer simultaneously.
- Atlas: Developed in 1962 at the University of Manchester, Atlas was a batch operating system that introduced the concept of a kernel, which managed the computer's resources.
The advantages of early mainframe OS include:
- Improved system efficiency
- Improved user interaction
- Support for batch processing
However, these OS have several limitations, including:
- Limited multitasking capabilities
- Limited user interaction
- Limited hardware support

Time-Sharing OS (1960s-1970s)
The development of time-sharing OS enabled multiple users to access the computer simultaneously, with the OS managing resources and scheduling tasks.
Examples of time-sharing OS include:
- Unix: Developed in the 1970s, Unix is a time-sharing OS designed for mainframe computers and servers.
- Multics: Developed in the 1960s, Multics is a time-sharing OS designed for mainframe computers and servers.
The advantages of time-sharing OS include:
- Improved multitasking capabilities
- Improved user interaction
- Support for multiple users
However, these OS have several limitations, including:
- Increased system requirements
- Increased complexity
- Steeper learning curve

Personal Computer OS (1970s-1980s)
The advent of personal computers led to the development of OS like CP/M, MS-DOS, and Apple DOS, which were designed for single-user, single-tasking environments. These OS were simple, easy to use, and had low system requirements.
Examples of personal computer OS include:
- CP/M (Control Program for Microcomputers): Developed by Digital Research, CP/M was one of the first popular OS for personal computers.
- MS-DOS (Microsoft Disk Operating System): Developed by Microsoft, MS-DOS was a widely used OS for personal computers in the 1980s.
- Apple DOS: Developed by Apple, Apple DOS was a simple OS for the Apple II computer.
The advantages of personal computer OS include:
- Simple design and implementation
- Low system requirements
- Easy to use and manage
However, these OS have several limitations, including:
- Limited multitasking capabilities
- Limited user interaction
- Limited hardware support

Graphical User Interface (GUI) OS (1980s-1990s)
The introduction of GUI OS like Windows, macOS, and Linux revolutionized user interaction, making computers more accessible and user-friendly. These OS used visual icons, windows, and menus to interact with the user, rather than command-line interfaces.
Examples of GUI OS include:
- Windows: Developed by Microsoft, Windows is a widely used GUI OS for personal computers.
- macOS: Developed by Apple, macOS is a GUI OS for Macintosh computers.
- Linux: Developed by Linus Torvalds, Linux is an open-source GUI OS for personal computers.
The advantages of GUI OS include:
- Improved user interaction
- Improved multitasking capabilities
- Support for multiple hardware devices
However, these OS have several limitations, including:
- Increased system requirements
- Increased complexity
- Steeper learning curve

Modern OS (2000s-present)
Today's OS are designed for multi-core processors, cloud computing, and mobile devices, with a focus on security, performance, and user experience. These OS use advanced technologies like virtualization, containerization, and artificial intelligence to improve system efficiency and security.
Examples of modern OS include:
- Windows 10: Developed by Microsoft, Windows 10 is a modern OS for personal computers.
- macOS High Sierra: Developed by Apple, macOS High Sierra is a modern OS for Macintosh computers.
- Linux distributions: Linux distributions like Ubuntu, Debian, and Fedora are modern OS for personal computers.
GUI stands for GRAPHICAL USER INTERFACE
MS-DOS stands for MICROSOFT DISK OPERATING SYSTEM
CP/M stands for CONTROL PROGRAM FOR MICROCOMPUTERS
CTSS stands for COMPATIBLE TIME-SHARING SYSTEM
FIFO stands for FIRST IN, FIRST OUT

PROCESS MANAGEMENT

PROCESS
A process is a program in execution.

THREAD
A thread is the unit of execution within a process. A process can have anywhere from just one thread to many threads.

PROCESS STATE
As a process executes, it changes state.
1. NEW:
- This is the initial state when a process or thread is created and waiting to be admitted into the process scheduler.
2. READY:
- Once admitted, the process enters the Ready state, indicating that it is ready to execute but is waiting for the CPU scheduler to dispatch it.
- It has everything it needs to run except the CPU time.
3. RUNNING:
- After the scheduler dispatches the process, it moves into the Running state.
- This means the process is currently being executed by the CPU.
4. WAITING:
- If the process needs to wait for an external event like I/O completion, it moves into the Waiting state.
- It stays in this state until the event (I/O or another condition) is completed.
5. INTERRUPT/EXIT:
- An interrupt can move a process from the Running state back to the Ready state, where it waits to be scheduled again.
- Once the process has finished, it goes to the Exit state, marking it for termination.
6. TERMINATE:
- Once a process completes its task, it enters the Terminated state, where it is completely removed from memory and no longer exists in the system.

PROCESS CREATION
In an operating system, a process can create multiple new processes during its execution. This is achieved through a system call known as create-process. The creating process is referred to as the parent process, and the new processes are called the children of that process.
A process is created when a program is executed. This can happen in several ways, such as:
- User interaction: The OS manages how users interact with hardware and software through interfaces like GUIs or CLIs, using system resources to process input and display outputs.
- System initialization: During boot-up, the OS initializes system resources (CPU, memory, I/O devices) and loads essential components like drivers and services to prepare the system for user tasks.
- Process spawning: The OS creates new processes when running applications. It allocates CPU time and memory to each process, managing multiple tasks efficiently through multitasking.
When a process is created, the operating system performs the following steps:
- Process ID allocation: The operating system assigns a unique Process ID (PID) to the new process.
- Memory allocation: The operating system allocates memory for the new process, including space for the program code, data, and stack.
- Context switching: The operating system saves the current state of the parent process and switches to the new process.
- Initialization: The operating system initializes the new process by setting up its memory space, allocating system resources, and loading the program code.
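On UNIX-like systems the create-process call described above is fork(), typically followed by exec() in the child and wait() in the parent. A minimal sketch (the /bin/ls command is just an illustrative program to load):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* create-process system call       */

    if (pid < 0) {                      /* fork failed                      */
        perror("fork");
        exit(1);
    } else if (pid == 0) {              /* child: new PID, inherited state  */
        printf("child  PID=%d parent=%d\n", getpid(), getppid());
        execlp("/bin/ls", "ls", "-l", (char *)NULL);  /* load a new program */
        perror("execlp");               /* only reached if exec fails       */
        exit(1);
    } else {                            /* parent: monitors the child       */
        int status;
        waitpid(pid, &status, 0);       /* wait for the child to terminate  */
        printf("parent PID=%d reaped child %d\n", getpid(), pid);
    }
    return 0;
}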
Parent-Child Relationship
When a parent process creates a child process, a parent-child relationship is established. The child process inherits certain attributes from the parent process, such as memory space and open files. The parent process is responsible for:
- Creating the child process
- Providing resources
- Monitoring the child process

Tree of Processes
Each child process can, in turn, create its own child processes, forming a tree of processes. This hierarchical structure allows for efficient management of processes and resources.

PROCESS TERMINATION
In an operating system, process termination can occur in various circumstances. A process can voluntarily terminate itself, or it can be terminated by another process or the operating system. A process can cause the termination of another process via an appropriate system call. This system call is usually only accessible to the parent process of the process that is to be terminated.

Reasons for Termination
A parent process may terminate the execution of one of its children for a variety of reasons, including:
- Resource Exceedance: The child process has exceeded its allocated resources, such as memory, CPU time, or I/O bandwidth. In this case, the parent process must have a mechanism to monitor the state of its children and take corrective action.
- Task Completion: The task assigned to the child process is no longer required, and the parent process can terminate it to free up resources.
- Parent Exit: The parent process is exiting, and the operating system does not allow a child process to continue running if its parent terminates. This ensures that the child process does not become orphaned and continue consuming system resources.

Mechanisms for Termination
The operating system provides various mechanisms for process termination, including:
- System Calls: A process can use system calls like kill() or terminate() to request the termination of another process.
- Signals: A process can send a signal to another process to request termination. Signals are a way for one process to communicate with another and can be used to request termination or to suspend or resume execution.
- Exception Handling: The operating system can terminate a process if it encounters an exception or error that cannot be handled by the process.

When a process terminates, the operating system performs several steps to clean up the process's resources:
- Resource deallocation: The operating system releases the resources allocated to the process, such as memory and file descriptors.
- Context switching: The operating system switches back to the parent process or another process.
- PID deallocation: The operating system deallocates the Process ID (PID) of the terminated process.

Resource Overhead and Parent-Child Process Termination
- Memory deallocation: When a process terminates, the operating system needs to deallocate the memory allocated to the process.
- File descriptor closure: When a process terminates, the operating system needs to close all the file descriptors opened by the process.
- System call overhead: When a process terminates, the operating system needs to perform system calls to deallocate resources, close file descriptors, and perform other cleanup tasks.

Resource Overhead of Zombie Processes
A zombie process is a process that has finished executing but still has an entry in the process table. The process table is a data structure maintained by the operating system to keep track of all running processes. Zombie processes can lead to a significant resource overhead because they still occupy an entry in the process table and may still hold system resources such as memory and file descriptors.
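A short sketch of these termination mechanisms on a POSIX system: the parent terminates a child with the kill() system call (sending SIGTERM) and then reaps it with waitpid(), so the child's process-table entry is released rather than lingering as a zombie.

#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                       /* child: pretend to do work      */
        while (1)
            pause();                      /* sleep until a signal arrives   */
    }

    sleep(1);                             /* give the child time to start   */
    kill(pid, SIGTERM);                   /* parent requests termination    */

    int status;
    waitpid(pid, &status, 0);             /* reap the child: frees its entry */
    if (WIFSIGNALED(status))
        printf("child %d terminated by signal %d\n", pid, WTERMSIG(status));
    return 0;
}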
Resource Overhead of Orphan Processes
When a parent process terminates, its child processes become orphan processes. An orphan process is a process that has no parent process. In such cases, the operating system needs to find a new parent process for the orphan process or terminate it. Orphan processes can lead to a significant resource overhead because they may still hold system resources such as memory and file descriptors.

Resource Overhead and Resource Exceedance
Resource overhead refers to the additional resources required to manage and maintain a system, process, or application. This can include resources such as memory, CPU cycles, I/O operations, and file descriptors.
Resource exceedance occurs when a system, process, or application uses more resources than are available or allocated to it. This can happen when a system or process requires more resources than expected, or when the available resources are insufficient to meet the demand.
The key differences between resource overhead and resource exceedance are:
- Normal vs abnormal: Resource overhead is a normal part of system operation, while resource exceedance is an abnormal condition that can lead to system crashes or failures.
- Optimization vs prevention: Resource overhead can be optimized through efficient design and implementation, while resource exceedance requires prevention through measures such as resource limits, quotas, and load balancing.
- Resource usage: Resource overhead refers to the additional resources required to manage and maintain a system, while resource exceedance refers to the use of more resources than are available or allocated.
Example: Consider a web server that is designed to handle 100 concurrent requests. The server has a resource overhead of 10% due to the need to manage and maintain the system. However, if the server receives 150 concurrent requests, it may experience resource exceedance due to the increased demand. In this case, the resource overhead is still present, but the server is now also experiencing resource exceedance due to the unexpected demand.

Components of a Process Control Block (PCB)
- Process ID (PID): Each process is assigned a unique Process ID (PID) that distinguishes it from other processes in the system. The PID is used by the OS to identify and manage the process.
- Process State: The process state indicates the current status of the process: NEW, READY, RUNNING, WAITING, or TERMINATED.
- Program Counter: The process counter, also known as the program counter, points to the current instruction being executed by the process.
- Registers: The PCB stores the contents of the process's registers, including the program counter, stack pointer, and general-purpose registers. This information is essential for context switching between processes.
- Memory Limits: The PCB specifies the memory limits for the process, including the base address, limit address, and size of the memory allocated to the process.
- List of Open Files: The PCB maintains a list of open files associated with the process. This information is used to manage file descriptors and ensure proper file access control.
- CPU Scheduling Information: The PCB may contain CPU scheduling information, such as:
  Priority: The priority of the process, which determines its scheduling order.
  Scheduling algorithm: The algorithm used to schedule the process, such as Round Robin, First-Come-First-Served, or Priority Scheduling.
  Time slice: The time allocated to the process in each scheduling round.
- Memory Management Information: The PCB may include memory management information, such as:
  Memory allocation: The memory allocated to the process, including the base address, limit address, and size.
  Page tables: The page tables used to map virtual addresses to physical addresses.
  Segmentation: The segmentation information, including the segment base, limit, and size.
- I/O Status Information: The PCB may include I/O status information, such as:
  I/O devices: The I/O devices allocated to the process, such as printers, scanners, or network interfaces.
  I/O requests: The I/O requests pending or in progress for the process.
  I/O completion: The status of I/O operations.
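Since the PCB is an internal kernel data structure, its exact layout differs from one OS to another; the C struct below is only an illustrative sketch of the fields listed above, and every name in it is hypothetical.

#include <stddef.h>

enum proc_state { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED };

struct cpu_context {                   /* saved registers for context switching */
    unsigned long program_counter;
    unsigned long stack_pointer;
    unsigned long general_regs[16];
};

struct pcb {                           /* hypothetical Process Control Block    */
    int                pid;            /* unique process identifier             */
    enum proc_state    state;          /* NEW / READY / RUNNING / WAITING / ... */
    struct cpu_context context;        /* program counter and registers         */
    void              *mem_base;       /* memory limits: base address           */
    size_t             mem_size;       /* ... and size of the allocation        */
    int                open_files[32]; /* list of open file descriptors         */
    int                priority;       /* CPU scheduling information            */
    unsigned int       time_slice_ms;  /* time slice per scheduling round       */
    struct pcb        *parent;         /* link for the tree of processes        */
};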
Context Switching
Context switching is a key mechanism in operating systems that enables multitasking by allowing multiple processes to share the CPU. It involves the OS saving the state of the currently running process and loading the state of another process. Context switching is a critical component of process implementation, as it enables the CPU to switch between multiple processes efficiently. The context switching process involves the following steps:
1. Save the current state: The current state of the process (registers, program counter, etc.) is saved in its Process Control Block (PCB).
2. Restore the new state: The state of the newly scheduled process is restored from its PCB.
3. Update the PCB: The PCB is updated to reflect the new process state.
4. Switch to the new process: The CPU context is switched to the newly scheduled process.

How Context Switching Occurs:
- Multitasking Operation: When one process gives way to another so the other one may be carried out. Context switching can occur when a task is allotted a limited amount of CPU time.
- Interrupt Handling: Context switching occurs because of a system interrupt from a hardware or software component. The OS dispatches the appropriate handler or hardware to service such interrupts.
- User/Kernel Switching: In some instances, context switching happens when transitioning between user and kernel mode. A process may need to switch to kernel mode to access system resources.

Overhead Associated with Context Switching:
1. Time Cost – Each context switch requires time to save and load process states, which can lead to delays if switches occur too frequently.
2. Resource Usage – CPU and memory resources are consumed during context switches, which may detract from the time available for actual process execution.
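The save-and-restore steps can be illustrated in user space with the POSIX ucontext API (obsolescent but still widely available): swapcontext() stores the current registers and program counter in one context object and loads another, which is essentially what the kernel does with PCBs on every context switch. A minimal sketch:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];        /* stack for the second context      */

static void task(void)
{
    printf("task: running after a context switch\n");
    swapcontext(&task_ctx, &main_ctx);    /* save task state, resume main      */
    printf("task: resumed a second time\n");
}                                          /* returning here follows uc_link    */

int main(void)
{
    getcontext(&task_ctx);                        /* initialize the context    */
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;        /* where to go when task ends */
    makecontext(&task_ctx, task, 0);

    printf("main: switching to task\n");
    swapcontext(&main_ctx, &task_ctx);    /* save main state, run task         */
    printf("main: back in main, switching again\n");
    swapcontext(&main_ctx, &task_ctx);    /* resume task where it stopped      */
    printf("main: done\n");
    return 0;
}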
Context Switch Techniques
Several techniques help reduce context-switching overhead:
1. Increase CPU Resources
2. Minimize Context Size
3. Real-Time Scheduling Techniques

Increase CPU Resources
One way to reduce the overhead of context switching is to increase the CPU resources available. This can be achieved through:
- Multi-core processors: By using multiple cores, the CPU can execute multiple processes simultaneously, reducing the need for context switching.
- Parallel processing: Breaking down a process into smaller, independent tasks that can be executed in parallel can reduce the context-switching overhead.
- Increasing clock speed: A faster clock speed can reduce the time taken for context switching, making it more efficient.

Minimize Context Size
Another technique to optimize context switching is to minimize the context size. This can be achieved through:
- Reducing process size: By minimizing the size of each process, the amount of data that needs to be saved and restored during context switching is reduced.
- Using lazy loading: Loading only the necessary data and code for each process can reduce the context size.
- Optimizing data structures: Using efficient data structures can reduce the memory footprint of each process, making context switching faster.

Real-Time Scheduling Techniques
Real-time scheduling techniques can also help optimize context switching by:
- Prioritizing tasks: Assigning priority to tasks based on their urgency and importance can ensure that critical tasks are executed quickly, reducing the need for context switching.
- Using rate monotonic scheduling (RMS): RMS assigns priority to tasks based on their period and execution time, ensuring that tasks with shorter periods are executed more frequently.
- Implementing earliest deadline first (EDF) scheduling: EDF scheduling assigns priority to tasks based on their deadline, ensuring that tasks with earlier deadlines are executed first.

Concurrent Processes
Concurrent processes are when several tasks (or processes) run at the same time on a computer. Concurrent processes are also known as concurrent threads or lightweight processes. These processes share the computer's resources, such as the CPU, memory, and devices like printers or hard drives. The operating system manages these processes to make sure they run smoothly together.
The operating system handles multiple threads or concurrent processes through scheduling algorithms, resource allocation, process scheduling, and synchronization. These mechanisms manage process execution efficiently, which helps prevent concurrency issues such as race conditions and deadlock.

Process Scheduling
The ability of the operating system to designate an order in which processes or tasks can access system resources.
There are two scheduling modes:
- Preemptive: used when a process transitions from the running state to the ready state or from the waiting state to the ready state.
- Non-Preemptive: used when a process terminates or transitions from the running state to the waiting state.

Key Differences:
- Preemptive scheduling interrupts a running process, while non-preemptive scheduling does not.
- Preemptive scheduling ensures fairness and equity, while non-preemptive scheduling can lead to starvation.
- Preemptive scheduling has higher context-switching overhead, while non-preemptive scheduling has lower overhead.

Different Process Scheduling Algorithms:
- First-Come-First-Served
- Shortest Job First
- Priority Scheduling
- Round Robin
- Multilevel Feedback Queue

Resource Allocation
The process of assigning system resources like memory, I/O devices, and CPU time to different processes.

Synchronization
Highly beneficial when several processes are operating concurrently and have simultaneous access to the same data or resources.

INTERACTION BETWEEN PROCESSES AND OPERATING SYSTEM
An effective computer system depends on the dynamic interaction between processes and the operating system (OS). In essence, a process is a program that is running, and the OS is in charge of overseeing these processes.

Process Creation
When a process is created, it interacts with the operating system in several ways:
- Process Creation Request: The process creation request is sent to the operating system, which then creates a new process.
- Process Control Block (PCB): The operating system creates a PCB, which contains information about the process.
- Memory Allocation: The operating system allocates memory to the process.

Process Execution
During execution, a process interacts with the operating system in several ways:
- System Calls: The process makes system calls to the operating system to request services, such as process creation, file I/O, and network communication.
- Interrupts: The process generates interrupts, which are handled by the operating system. Interrupts can be caused by hardware events, such as keyboard presses or disk completion.
- Context Switching: The operating system performs context switching, which involves switching the CPU's context from one process to another.

Process Synchronization
Processes interact with each other through synchronization mechanisms, such as:
- Semaphores: Semaphores are used to synchronize access to shared resources, such as files or printers.
- Monitors: Monitors are used to synchronize access to shared resources and provide a way for processes to communicate with each other.
- Message Passing: Message passing is used for inter-process communication, where processes send and receive messages to each other.

Process Communication
Processes communicate with each other through various mechanisms, such as:
- Pipes: Pipes are used for inter-process communication, where processes can send and receive data through a pipe.
- Sockets: Sockets are used for inter-process communication over a network, where processes can send and receive data through a socket.
- Shared Memory: Shared memory is used for inter-process communication, where processes can share a common memory region.
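A minimal sketch of the pipe mechanism above on a POSIX system: the parent writes a short message into the write end of a pipe, and the child reads it from the read end.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                       /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) != 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                   /* child: reads from the pipe        */
        close(fds[1]);                /* close the unused write end        */
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fds[0]);
        return 0;
    }

    close(fds[0]);                    /* parent: close the unused read end */
    const char *msg = "hello from the parent";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);                       /* reap the child                    */
    return 0;
}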
Process Termination
When a process terminates, it interacts with the operating system in several ways:
- Exit System Call: The process makes an exit system call to the operating system, which then terminates the process.
- Process Cleanup: The operating system performs process cleanup, which involves deallocating memory and closing open files.
- Parent Process Notification: The operating system notifies the parent process of the terminated process, which can then take action accordingly.

In summary, processes interact with the operating system through various mechanisms, such as system calls, interrupts, and synchronization mechanisms. The operating system provides various services to processes, such as process scheduling, memory management, and file management. Understanding these interactions is crucial for designing and implementing efficient and effective operating systems.

WHAT IS KERNEL?
A kernel is the core part of an operating system (OS) that manages the system's hardware resources and provides services to applications and users. It acts as an intermediary between the hardware and user-level applications, controlling access to system resources and ensuring efficient and secure operation.
The kernel's primary focus is on managing the hardware components of the computer system, providing a layer of abstraction between the hardware and the user-level applications. The kernel's main concern is to efficiently and securely manage the system's resources, such as:
- CPU: process scheduling, context switching, and interrupt handling
- Memory: memory allocation, deallocation, and protection
- I/O Devices: input/output operations, device management, and interrupt handling
- Storage: file system management, disk management, and storage allocation
- Networking: network interface management, packet processing, and network protocol implementation

The kernel operates in two modes:
- Kernel Mode: The kernel has full access to system resources and executes with the highest privilege.
- User Mode: Applications and users execute with limited privileges, relying on the kernel for system services.
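The mode split can be seen from an ordinary program: every request for a kernel service goes through a system call that switches the CPU into kernel mode and back. A minimal Linux-specific sketch (SYS_write and the syscall() wrapper are Linux interfaces; the portable write() wrapper performs the same request):

#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>              /* SYS_write (Linux-specific)            */
#include <unistd.h>

int main(void)
{
    const char *msg = "hello from user mode\n";

    /* The usual way: the write() wrapper traps into kernel mode, the kernel   */
    /* performs the I/O with full privileges, then execution returns here.     */
    write(STDOUT_FILENO, msg, strlen(msg));

    /* The same request made explicitly through the system-call interface.     */
    syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));

    return 0;
}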