Operating System Fundamentals: Introduction
Mansoura University
Dr. Mohamed Handosa
Summary
This document provides an introduction to operating system fundamentals, focusing on key concepts like hardware, application programs, and the operating system's role as a resource allocator and control program. It details the views of operating systems, the kernel, system programs, computer system organization, and computer system architecture.
Full Transcript
Information Technology Institute
Operating System Fundamentals: Introduction
Dr. Mohamed Handosa, Mansoura University ([email protected])

Topics
1.1. Computer System Components
1.2. Computer System Organization
1.3. Computer System Architecture
1.4. Operating System Operations
1.5. Computing Environment

Computer System Components
- Hardware: provides the basic computing resources.
- Application programs: define the ways in which resources are used to solve user problems.
- Operating system (OS): controls the hardware and coordinates its use among the application programs.

Operating System Views
- As a resource allocator: it manages resources and decides how to allocate them to specific programs and users.
- As a control program: it controls the I/O devices and manages the execution of user programs to prevent errors and improper use of the computer.

Kernel and System Programs
- Operating system = kernel + system programs.
- The kernel is the one program running at all times.
- System programs are associated with the operating system.
- Application programs are not associated with the operating system.

Computer System Organization
- Interrupts
- Storage Structure
- I/O Structure

Computer System Organization
[Figure: the system bus (Operating System Concepts, 10th Edition)]
- Each device controller controls a specific type of device.
- The CPU and the device controllers can execute in parallel.
- The memory controller synchronizes access to the shared memory.

Bootstrap Program
- A simple program stored in ROM or EEPROM; also known by the general term firmware.
- It runs when the computer is powered up or rebooted to:
  - Initialize all aspects of the system (e.g., CPU, memory).
  - Locate the kernel, load it into memory, and start its execution.

Interrupts
- Once the system is fully booted, it waits for some event to occur.
- An interrupt is a signal that is generated when some event occurs.
- A hardware-generated interrupt occurs when a device sends a signal to the CPU.
- A software-generated interrupt (or trap) occurs when a program executes a system call.
- Each computer architecture has a predefined set of interrupts.
- Each interrupt has a corresponding service routine (or handler), which must be executed when the interrupt occurs.
- The interrupt vector is stored in low memory and holds the addresses of the interrupt service routines (interrupt handlers).

Interrupt Timeline Example
[Figure: Operating System Concepts, 10th Edition]

Storage Structure
- ROM cannot be changed; EEPROM can be changed, but not frequently.
- The main memory (RAM) is implemented as DRAM.
- The CPU can load instructions only from main memory and interacts with RAM using load and store instructions.
- The load instruction moves a word from RAM to a CPU register; the store instruction moves the content of a CPU register to RAM.

Instruction Execution Cycle
1. Fetch the instruction from memory and store it in the CPU's instruction register.
2. Decode the instruction, which may require fetching operands from memory and storing them in internal CPU registers.
3. Execute the instruction, which may store the result back in memory.

Secondary Storage
- Main memory cannot store all programs and data permanently:
  - RAM is usually too small to store all needed programs and data.
  - RAM is volatile; it loses its contents when power is off.
- Thus, most computer systems provide secondary storage as an extension of main memory.
- The main requirement for secondary storage is the ability to hold large quantities of data permanently (e.g., a hard disk drive).

Storage Hierarchy
[Figure: Operating System Concepts, 10th Edition]

Caching
- Caches can be installed to improve access time or transfer rate.
- Caching refers to the use of high-speed memory to hold a copy of recently accessed data, assuming that it will be needed again soon.
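The caching idea above can be illustrated in software with a toy direct-mapped cache. The class, slot count, and backing store here are invented for illustration; real CPU caches work on blocks with tags, in hardware.

```python
# A toy direct-mapped cache: each memory address maps to exactly one
# cache slot (address % num_slots). The sketch keeps only the core
# idea: a fast copy of recently accessed data, reused if touched again.

class DirectMappedCache:
    def __init__(self, num_slots, backing_store):
        self.slots = [None] * num_slots   # each slot holds (address, value) or None
        self.backing = backing_store      # the "slow" main memory (a dict here)
        self.hits = 0
        self.misses = 0

    def read(self, address):
        slot = address % len(self.slots)
        entry = self.slots[slot]
        if entry is not None and entry[0] == address:
            self.hits += 1                # fast path: data already cached
            return entry[1]
        self.misses += 1                  # slow path: fetch from backing store
        value = self.backing[address]
        self.slots[slot] = (address, value)   # keep a copy, hoping for reuse
        return value

memory = {addr: addr * 10 for addr in range(16)}
cache = DirectMappedCache(num_slots=4, backing_store=memory)
cache.read(5)   # miss: loaded into slot 5 % 4 == 1
cache.read(5)   # hit: served from the cache
cache.read(9)   # miss: 9 % 4 == 1, so it evicts address 5 (a conflict miss)
```

Note how address 9 evicts address 5 because both map to the same slot; this is exactly why the next access to 5 would miss again.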
- Cache coherency ensures that an update to some data in a cache is reflected immediately in other caches containing a copy of that data.

I/O Structure
- A general-purpose computer system consists of CPUs and multiple device controllers connected through a common bus.
- Each device controller controls a specific type of device (e.g., USB).
- A device controller has a local buffer storage and a set of registers.
- The device controller is responsible for moving data between the peripheral devices that it controls and its local buffer storage.
- The operating system provides a device driver that understands the device controller and presents a uniform interface to the device.

I/O Operation
1. The device driver loads the registers within the device controller.
2. The device controller determines what action to take based on the registers.
3. The device controller transfers data between the device and its local buffer.
4. The device controller generates an interrupt to inform the device driver.
5. The device driver returns control to the operating system, possibly returning the data if the operation was a read; for other operations, it returns status information.
- This is fine for moving small amounts of data but can produce high overhead on the CPU when used for bulk data movement.

Direct Memory Access
- The device controller transfers an entire block of data directly between its local buffer and main memory, with no intervention by the CPU.
- It generates only one interrupt per block, to inform the device driver, rather than one interrupt per byte.
- While the device controller is transferring a whole data block, the CPU is available to accomplish other work.

Modern Computer System
[Figure: Operating System Concepts, 10th Edition]

Computer System Architecture
- Single Processor Systems
- Multiprocessor Systems
- Clustered Systems

Single Processor Systems
- A general-purpose processor supports a complete instruction set.
- Therefore, it can execute user processes.
- A special-purpose processor supports a limited instruction set; therefore, it cannot execute user processes.
- Special-purpose processors include device-specific processors, such as disk, keyboard, and graphics controllers.
- A single-processor system has only one general-purpose processor; the use of special-purpose microprocessors is common and does not turn a single-processor system into a multiprocessor.

Multiprocessor Systems
- Also known as parallel systems or tightly coupled systems.
- Two or more general-purpose processors in close communication, sharing the bus, clock, memory, and peripheral devices.
- First appeared in servers, but have recently appeared in mobile devices such as smartphones and tablet computers.

Advantages of Multiprocessor Systems
- Increased throughput: more processors means more work done in less time. However, the speed-up ratio with N processors is less than N.
- Economy of scale: sharing peripherals, mass storage, and power supplies means lower cost.
- Increased reliability: failure of one processor will not halt the system, but only slow it down.

System Reliability
- Increased reliability is crucial in many applications.
- Graceful degradation is the ability to continue providing service proportional to the level of surviving hardware.
- Fault-tolerant systems can continue operation despite failures.
- Fault tolerance requires a mechanism to allow the failure to be detected, diagnosed, and, if possible, corrected.

Types of Multiprocessing
- Asymmetric multiprocessing: defines a boss-worker relationship. The boss processor schedules and allocates work to the worker processors, which look to the boss for instruction or have predefined tasks.
- Symmetric multiprocessing (SMP): used by most systems. There is no boss-worker relationship; all processors are peers, and each processor can perform any task.
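The slides state that the speed-up ratio with N processors is less than N but do not quantify it; the standard model for this (added here for context, not from the slides) is Amdahl's law: if a fraction s of a program is inherently serial, N processors give a speed-up of 1 / (s + (1 - s) / N).

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Amdahl's law: speed-up with N processors when a fraction s of
    the work is serial is 1 / (s + (1 - s) / N)."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_processors)

# Even a small serial fraction caps the benefit well below N:
print(round(amdahl_speedup(0.05, 4), 2))    # 4 processors  -> 3.48
print(round(amdahl_speedup(0.05, 100), 2))  # 100 processors -> 16.81, far from 100x
```

With a perfectly parallel program (s = 0) the speed-up is exactly N, which is why real systems, which always have some serial work, stay below it.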
Symmetric Multiprocessing Architecture
[Figure: Operating System Concepts, 10th Edition]

Multicore Systems
- Multiple computing cores on a single chip.
- More efficient than multiple chips with single cores:
  - On-chip communication is faster than between-chip communication.
  - One chip with multiple cores uses less power than multiple single-core chips.
- Multicore systems are multiprocessor systems, but not all multiprocessor systems are multicore systems.

Dual-Core Architecture
[Figure: Operating System Concepts, 10th Edition]

Clustered Systems
- Also known as loosely coupled systems.
- Composed of two or more individual systems (or nodes); each node may be a single-processor system or a multicore system.
- Clustered computers share storage and are closely linked via a LAN.

Types of Clustering
- Asymmetric clustering: one node is in hot-standby mode, monitoring the active server. If the active server fails, the hot-standby node becomes the active server. This supports high-availability service, where a node failure does not stop service.
- Symmetric clustering: two or more nodes are running applications and monitoring each other. This supports high-performance computing better than multiprocessor systems do: an application can run concurrently on all cluster nodes using parallelization, which divides a program into separate components that run in parallel.

General Structure of a Clustered System
[Figure: Operating System Concepts, 10th Edition]

Operating System Operations
- Dual-Mode and Multimode Operation
- Timer

Dual-Mode Operation
- Protects the operating system from errant users.
- Two modes of operation: user mode and kernel mode (kernel mode = supervisor mode = system mode = privileged mode).
- User-defined code must execute in user mode; only the operating system can execute in kernel mode.
- A mode bit indicates the current mode: kernel (0) or user (1).
- All instructions that may cause harm are designated as privileged; privileged instructions can execute only in kernel mode (i.e., by the OS).
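The dual-mode rules above can be illustrated with a toy simulation. The CPU class and instruction names here are invented for illustration; real hardware enforces this check in silicon and vectors the trap through an interrupt handler.

```python
# Toy model of dual-mode operation. The mode bit follows the slide's
# convention: kernel = 0, user = 1. A privileged instruction executed
# in user mode traps to the OS instead of running.

KERNEL, USER = 0, 1
PRIVILEGED = {"set_timer", "do_io", "halt"}   # hypothetical instruction names

class CPU:
    def __init__(self):
        self.mode = KERNEL        # the system boots in kernel mode
        self.trapped = []         # record of attempted privileged instructions

    def execute(self, instruction):
        if instruction in PRIVILEGED and self.mode == USER:
            self.trapped.append(instruction)   # hardware traps to the OS
            self.mode = KERNEL                 # the trap handler runs in kernel mode
            return "trap"
        return "ok"

cpu = CPU()
cpu.mode = USER                            # OS switches to user mode before user code runs
assert cpu.execute("add") == "ok"          # ordinary instruction: allowed
assert cpu.execute("set_timer") == "trap"  # privileged in user mode: trapped
assert cpu.mode == KERNEL                  # control is back with the OS
```

The last assertion captures the point of the mechanism: any attempt to do something harmful hands control back to the operating system.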
Dual-Mode Operation
[Figure: Operating System Concepts, 10th Edition]

Multi-Mode Operation
- The concept of modes can be extended beyond two modes.
- A CPU that supports virtualization can have a third mode for the VMM (Virtual Machine Manager).
- VMM mode provides more privileges than user mode but fewer than kernel mode; the VMM needs these privileges so it can create and manage virtual machines.
- Sometimes, different modes are used by various kernel components.

Timer
- The OS must prevent a user program from running too long.
- A timer can be set to interrupt the system after a specified period.
- A timer is generally implemented by a fixed-rate clock and a counter.
- The OS sets the counter before turning control over to a user program. Every time the clock ticks, the counter is decremented. When the counter reaches 0, an interrupt occurs and control returns to the OS.
- The OS may treat the interrupt as a fatal error or give the program more time.
- Clearly, instructions that modify the content of the timer are privileged.

Computing Environment
- Traditional Computing
- Virtualization
- Mobile Computing
- Cloud Computing
- Distributed Computing
- Real-Time Embedded Systems

Traditional Computing
- Computer systems started as mainframe computers, used primarily by large organizations.
- Mainframe computers have evolved from batch systems to multiprogramming systems, and then to time-sharing systems.
- Later, desktop computers were introduced and gained popularity.

Batch Systems
- Processed jobs in bulk, one job after another.
- Input devices were card readers and tape drives; line printers output results and a memory dump.
- The OS was a resident monitor that performed automatic job sequencing.
- If the running job needed to wait for an I/O operation, the CPU remained idle waiting for the job.
- Disadvantages: low CPU utilization, as the CPU is often idle, and no direct interaction between user and system.

Multiprogramming Systems
- Several jobs are kept in main memory.
- The CPU is always executing a job (no idle time): if the running job needs to wait for an I/O operation, the OS picks another job to start or continue execution.
- This requires the OS to provide several features:
  - I/O routines: only the OS can perform I/O.
  - CPU scheduling: to select a job for execution.
  - Memory management: to allocate memory to several jobs.
  - Resource allocation: no job can use the resources of other jobs.
- Increased CPU utilization, but still no direct user interaction.
[Figure: Operating System Concepts, 10th Edition]

Timesharing Systems
- An interactive computer system provides direct communication between the user and the system, and should be responsive (i.e., response time to user input should be minimal).
- A time-sharing (or multitasking) system is an interactive computer system and an extension of multiprogramming systems.
- The CPU switches between jobs so frequently that the user can interact with each program while it is running.

Desktop Computers
- A personal computer is dedicated to a single user.
- I/O devices: keyboards, mice, display screens, small printers.
- The OS focuses on achieving user convenience and responsiveness, with less focus on advanced CPU utilization and protection features.
- Desktop operating systems include Windows, Mac OS, UNIX, and Linux.

Mobile Computing
- Computing on handheld smartphones and tablet computers; these devices are portable, lightweight, and battery-powered.
- A mobile device can provide features that are either unavailable or impractical on a desktop or laptop computer, such as:
  - GPS: to determine the precise location of the device on Earth.
  - Accelerometers: to measure linear acceleration in different directions.
  - Gyroscopes: to detect the orientation of the device with respect to the ground.
- Two operating systems dominate mobile computing: Apple iOS and Google Android.

Distributed Computing
- Computer networks are characterized by the distances between nodes:
  - Personal-area network (PAN): wireless links over a short distance.
  - Local-area network (LAN): links nodes within a room, building, or campus.
  - Metropolitan-area network (MAN): connects computer nodes within a city.
  - Wide-area network (WAN): links nodes across buildings, cities, or countries.
- A distributed system is a collection of physically separate, possibly heterogeneous, computer systems that are networked together.
- Distributed computing is the use of distributed systems to solve single large problems. It may take the form of client-server computing or peer-to-peer computing.

Client-Server Computing
- A model of distributed systems in which server systems satisfy the requests of client systems.
- A server can be categorized as a compute-server or a file-server:
  - File-servers allow clients to create, update, read, and delete files.
  - Compute-servers receive client requests, execute actions, and return results.
[Figure: Operating System Concepts, 10th Edition]

Peer-to-Peer Computing
- Another model of distributed computing.
- A node must first join the peer-to-peer system. All nodes are peers, where each node may act as:
  - A server, when providing services to other nodes.
  - A client, when requesting services from other nodes.
- In client-server systems, the server is a bottleneck; in P2P systems, several nodes can provide the service.

Virtualization
- Allows for creating a virtual machine that acts like a real computer.
- A single computer (host) may run several virtual machines (guests).
- The host and the guests share the hardware, but each has its own OS.
- The virtual machine manager runs the guest machines, manages their resource use, and protects each guest from the others.
[Figure: Operating System Concepts, 10th Edition]

Cloud Computing
- Delivers computing as a service; users pay based on usage.
- Uses virtualization to run millions of VMs on thousands of servers.
- A cloud service can be categorized as a public, private, or hybrid cloud:
  - A public cloud is available to anyone via the Internet.
  - A private cloud is used only by the company owning it.
  - A hybrid cloud includes both public and private cloud components.
- A cloud may provide infrastructure, platform, or application as a service:
  - Software as a service (SaaS) provides applications (e.g., Google Docs).
  - Platform as a service (PaaS) provides a software stack (e.g., Google Firebase).
  - Infrastructure as a service (IaaS) provides servers or storage over the Internet.
- Internet connectivity requires security measures such as firewalls.
- Load balancers spread traffic across multiple applications.
[Figure: Operating System Concepts, 10th Edition]

Real-Time Embedded Systems
- Embedded computers are primitive, with very specific tasks; their OSs provide limited features with little or no user interface.
- Embedded systems almost always run real-time operating systems.
- A real-time system has well-defined, fixed time constraints: processing must be done within the defined constraints, or the system fails.
- Examples: weapons systems, nuclear plant systems, medical imaging systems, and industrial control systems.

Information Technology Institute
Operating System Fundamentals: Operating System Structures
Dr. Mohamed Handosa, Mansoura University ([email protected])

Topics
2.1. Operating system services
2.2. System calls
2.3. Types of system calls
2.4. System boot
2.5. Operating system structure

Operating System Services (1 of 2)
- User interface: provide a graphical user interface (GUI) or a command-line interface (CLI).
- Program execution: load programs into memory and run them. A program must be able to end its execution, either normally or abnormally.
- I/O operations: user processes cannot execute I/O operations directly, so the OS must provide processes with some means to perform I/O.

Operating System Services (2 of 2)
- File-system manipulation: allow user processes to read, write, create, and delete files.
- Communications: enable user processes to exchange information.
- Communication may be implemented via shared memory or message passing.
- Error detection: ensure correct computing by detecting errors in hardware or in user programs.

Additional Operating System Functions
- Resource allocation: allocate resources to the running processes.
- Logging: record which process uses how much, and what kinds, of resources. This record may be used for account billing or for accumulating usage statistics.
- Protection and security: protection ensures that all access to system resources is controlled; the security of the system from outsiders is also important.

System Calls
- Provide an interface to the services offered by the OS; user processes use system calls to invoke OS services.
- System calls are available as functions written in C and C++; some low-level tasks may be written in assembly language.
- When a process executes a system call, the system traps to the OS.
- The OS provides application programmers with an API (Application Programming Interface) to invoke services.

System Call Example
[Figure: Operating System Concepts, 10th Edition]

Passing Parameters to System Calls
- Method 1: pass the parameters in CPU registers.
- Method 2: store the parameters in a table (block) in memory and pass the address of the block as a parameter in a CPU register.
- Method 3: the process pushes the parameters onto the stack, and the operating system later pops them off the stack.

Passing Parameters (Method 2)
[Figure: Operating System Concepts, 10th Edition]

Types of System Calls (1 of 2)
- Process control: create, load, execute, terminate, and abort; get and set process attributes; wait for time, wait for event, signal event; allocate and free memory.
- File management: create and delete files; open, close, read, write, and reposition files; get and set file attributes.
- Device management: request and release devices; read, write, and reposition; get and set device attributes; logically attach or detach devices.
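The file-management calls listed above map one-to-one onto POSIX system calls, which Python's os module exposes under the same names. A minimal sketch (the file name is invented for illustration):

```python
# open/read/write/lseek/close/stat/unlink are thin wrappers around the
# POSIX system calls of the same names.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_RDWR)  # create + open
os.write(fd, b"hello, syscalls")            # write
os.lseek(fd, 0, os.SEEK_SET)                # reposition to the start
data = os.read(fd, 5)                       # read the first 5 bytes
os.close(fd)                                # close

info = os.stat(path)                        # "get file attributes"
print(data, info.st_size)
os.unlink(path)                             # delete
```

Each line corresponds to one trap into the kernel; the C library (and Python's os module on top of it) is the API layer the slides describe between the programmer and the raw system-call interface.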
Types of System Calls (2 of 2)
- Information maintenance: get and set the time or date; get and set system data; get and set process, file, or device attributes.
- Communications: create and delete communication connections; send and receive messages; transfer status information; attach and detach remote devices.

System Boot
- The process of starting a computer by loading the kernel. On most systems, the boot process proceeds as follows:
  1. The bootstrap program locates the kernel.
  2. The kernel is loaded into memory and started.
  3. The kernel initializes hardware.
  4. The root file system is mounted.

Operating System Structures
- Monolithic Approach
- Modules Approach
- Layered Approach
- Hybrid Approach
- Micro-kernel Approach

Monolithic Approach
- No structure at all; difficult to implement and extend.
- Communication within the kernel is fast, with very little overhead in the system-call interface.
- A single, static binary file that runs in a single address space.
- Tightly coupled, as changes to one part can affect other parts.
- Several OSs use this approach, such as UNIX, Linux, and Windows.

Layered Approach
- Divides the OS into a set of layers, which makes it easier to implement and extend the kernel.
- Each layer uses the functions of only the lower-level layers; therefore, it is difficult to appropriately define the functionality of each layer.
- Loosely coupled, as changes in one layer do not affect the others.
- Traversing multiple layers to call services results in poor performance.
- The bottom layer (layer 0) is the hardware; the highest layer (layer N) is the user interface.
[Figure: Operating System Concepts, 10th Edition]

Micro-kernel Approach
- Removes all nonessential components from the kernel and implements them as system programs.
[Figure: Operating System Concepts, 10th Edition]

Advantages of the Micro-kernel Approach
- It is easier to extend the OS: there is no need to modify the kernel.
- New services are added as system programs.
- It is easier to modify the kernel if needed, as the kernel is smaller.
- It is easier to port the OS from one hardware design to another.
- It provides more security and reliability: most services run as user rather than kernel processes, so if a service fails, the rest of the operating system remains untouched.

Modules Approach
- The kernel has a set of core components and can link in additional services via LKMs (Loadable Kernel Modules), either at boot time or during run time.
- Linking additional services dynamically is preferable to recompiling the kernel every time a change is made.
- This type of design is common in modern implementations of UNIX, such as Linux, macOS, and Solaris, as well as in Windows.

Modules Approach versus Other Approaches
- Layered approach. Similarity: each kernel section has a defined and protected interface. Difference: the modules approach is more flexible, because any module can call any other module.
- Micro-kernel approach. Similarity: the primary module has only core functions. Difference: modules do not need to use message passing to communicate.

Hybrid Approach
- In practice, very few OSs adopt a single, strictly defined structure. Instead, they combine different structures, resulting in hybrid systems that address performance, security, and usability issues.
- For example, Linux is both monolithic, to provide very efficient performance, and modular, to add new functionality dynamically to the kernel.

Information Technology Institute
Operating Systems: Processes
Dr. Mohamed Handosa, Mansoura University ([email protected])

Topics
3.1. Process Content
3.2. Process States
3.3. Process Control Block
3.4. Process Scheduling
3.5. Operations on Processes
3.6. Inter-Process Communication
3.7. Client-Server Communication

Process
- Also known historically as a job.
- A process is a program in execution.
- Process execution must be sequential.
- The current status of a process is represented by the value of the program counter (PC) and the contents of the processor's registers.

Process Content
- Text section: executable code.
- Data section: global variables.
- Heap section: dynamically allocated variables.
- Stack section: temporary variables (local variables).
[Figure: Operating System Concepts, 10th Edition]

Process States
- New: the process is being created.
- Running: instructions are being executed.
- Waiting: the process is waiting for some event to occur.
- Ready: the process is waiting to be assigned to a processor.
- Terminated: the process has finished execution.
[Figure: Operating System Concepts, 10th Edition]

Process Control Block (PCB)
- Also known as a task control block.
- Each process is represented in the OS by a PCB, a data structure that stores information about the process.
- The OS uses a PCB to store all the data needed to start (or restart) a process, along with accounting data.
[Figure: Operating System Concepts, 10th Edition]

PCB Contents
- Process state (e.g., new, ready, etc.).
- Program counter: the address of the next instruction.
- CPU registers: store state information when an interrupt occurs.
- CPU-scheduling information: scheduling parameters (e.g., priority).
- Memory-management information: the process's location in memory.
- Accounting information: time limits, process numbers, and so on.
- I/O status information: allocated I/O devices, open files, and so on.

Process Scheduling
- The CPU scheduler selects a ready process from memory for execution.
- The degree of multiprogramming is the number of processes in memory.
- Processes can be described as either I/O-bound or CPU-bound: an I/O-bound process spends more of its time doing I/O, while a CPU-bound process spends more of its time doing computations.
- If there are more ready processes than cores, the excess processes will have to wait until a core is free and they can be rescheduled.
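The PCB contents listed above can be sketched as a plain data structure. The field names follow the slide, not any real kernel's layout (Linux's rough equivalent is task_struct):

```python
# Illustrative PCB sketch: one record per process, holding everything
# the OS needs to stop the process and later restart it.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"                 # new, ready, running, waiting, terminated
    program_counter: int = 0           # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved when an interrupt occurs
    priority: int = 0                  # CPU-scheduling information
    memory_base: int = 0               # memory-management information
    memory_limit: int = 0
    cpu_time_used: float = 0.0         # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42, state="ready", priority=5)
pcb.state = "running"
pcb.registers = {"r0": 7, "sp": 0x7FFF}   # what a context switch would save here
```

A context switch, described next in the slides, amounts to writing the running process's registers and program counter into its PCB and loading those fields from the PCB of the process scheduled to run.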
Scheduling Queues
- The ready queue contains processes waiting to execute on a CPU.
- A wait queue contains processes waiting for a certain event to occur.
[Figure: Operating System Concepts, 10th Edition]

Queuing Diagram
[Figure: queuing diagram from process creation to termination (Operating System Concepts, 10th Edition)]

Context Switch
- The context of a process is represented in its PCB; it includes the values of the CPU registers, the process's location in memory, and so on.
- A context switch occurs when the CPU switches to another process. It saves the current context of the running process in its PCB and loads the saved context of the process scheduled to run next.
- Context-switch time is pure overhead: no useful work is done while switching.

Context Switch Example
[Figure: Operating System Concepts, 10th Edition]

Operations on Processes
- Process Creation
- Process Termination

Process Creation
- A process may create several new processes. The creating process is called the parent process, and the new processes are called the children of that process.
- Each of these new processes may, in turn, create other processes.

Parent-Child Process Relationship
- In terms of execution, there are two possibilities:
  - The parent continues to execute concurrently with its children.
  - The parent waits until some or all of its children have terminated.
- In terms of address space, there are two possibilities:
  - The child process has a new program loaded into it.
  - The child process is a duplicate of the parent process.
- In terms of resource sharing, there are three possibilities:
  - Parent and children share no resources.
  - Parent and children share all resources.
  - Children share a subset of the parent's resources.

Process Termination
- A process terminates when it finishes execution; it asks the OS to delete it by using the exit system call.
- Once a process is terminated, the OS deallocates all of its resources.
- A child process may return data to its parent, which collects it using the wait system call.

Child Process Termination
- A parent process may terminate its child using the abort system call.
- This may happen in the following cases:
  - The task assigned to the child is no longer required.
  - The parent is exiting, in a cascading-termination system.
  - The child has exceeded the usage of the resources allocated to it.
- Some systems do not allow a child to exist if its parent has terminated.
- Cascading termination means that if a process terminates, all of its children must be terminated as well.

Inter-Process Communication
- Shared Memory
- Message Passing

Cooperating Processes
- An independent process does not share data with other processes; a cooperating process shares data with other executing processes.
- Reasons to support process cooperation:
  - Information sharing (e.g., copying and pasting).
  - Computation speedup: break a task into sub-tasks that run in parallel.
  - Modularity: divide the functions of a system into separate processes.
- Cooperating processes need an IPC (Inter-Process Communication) mechanism to exchange data.
- There are two fundamental mechanisms: shared memory and message passing.

Shared Memory versus Message Passing
[Figure: Operating System Concepts, 10th Edition]

Shared Memory
- Normally, the OS restricts a process to its own memory; communicating processes agree to remove this restriction.
- A process creates a shared-memory region in its address space, and other processes can then attach that region to their own address spaces.
- The processes exchange information by reading and writing data in that region.
- The communicating processes are responsible for:
  - Determining the form of the data and the location of the shared data.
  - Ensuring that they are not writing to the same location simultaneously.

Message Passing
- Allows processes to communicate without sharing memory; useful in a distributed environment and managed by the OS.
- A message-passing facility should provide at least two operations: send(message) and receive(message).

Message Size
- Fixed-sized messages are easier to implement at the system level but place a restriction on the application programmer.
- Variable-sized messages are more complex to implement at the system level but make application programming easier and more flexible.

Communication Link
- The physical implementation of a link can take several forms, e.g., a hardware bus or a network.
- The logical implementation of a link can take several forms as well:
  - Direct or indirect communication.
  - Synchronous or asynchronous communication.
  - Automatic or explicit buffering.

Direct Communication
- Direct communication may use symmetric or asymmetric addressing.
- In symmetric addressing, both sender and receiver name each other:
  - send(P, message): send a message to process P.
  - receive(Q, message): receive a message from process Q.
- In asymmetric addressing, only the sender names the receiver process:
  - send(P, message): send a message to process P.
  - receive(id, message): receive a message from any process.
- A direct communication link has the following properties:
  - A link is established automatically.
  - A link is associated with exactly two processes.
  - Between each pair of processes, there exists exactly one link.

Indirect Communication
- Messages are sent to and received from mailboxes (or ports).
- A mailbox user is a process that can send messages to a mailbox; the mailbox owner is a process that can receive messages from the mailbox.
- In indirect communication, send() and receive() are defined as:
  - send(A, message): send a message to mailbox A.
  - receive(A, message): receive a message from mailbox A.
- An indirect communication link has the following properties:
  - A link is established by creating and sharing a mailbox.
  - A link may be associated with more than two processes.
  - Each pair of processes may have several links (i.e., mailboxes).

Synchronization
- Blocking (synchronous) send: the sending process is blocked until the message is received by the receiving process or by the mailbox.
- Non-blocking (asynchronous) send: the sending process sends the message and resumes operation immediately.
- Blocking (synchronous) receive:
The receiver blocks until a message is available. It resumes operation only after receiving the message. Non-blocking (asynchronous) receive. The receiver retrieves either a valid message or a null and resumes operation immediately. 32 Buffering Zero capacity (no buffering) The link cannot have any messages waiting in it. The sender must block until the recipient receives the message. Bounded capacity (automatic buffering) The queue has finite length n; thus, at most n messages can reside in it. If the queue is full, the sender must block until space becomes available. Unbounded capacity (automatic buffering) The queue's length is infinite; thus, any number of messages can wait in it. The sender never blocks. 33 Client-Server Communication Sockets Remote Procedure Calls 34 Sockets A socket is an endpoint for communication. It is identified by an IP address concatenated with a port number. A pair of communicating processes employs a pair of sockets. The server waits for client requests by listening on a specified port. When a request is received, the server accepts a connection from the client socket. 35 Sockets Example 36 Remote Procedure Call (RPC) Allows a client to invoke a procedure on a remote host. An RPC daemon listens to a port on the remote system. A stub on the client side hides the communication details. A stub allows the remote procedure to be called like a local one. RPCs may fail or execute multiple times due to network errors. Messages exchanged in RPC communication are well structured. A message contains the function identifier and the passed parameters. The function is executed, and any output is returned as a message. 37 Thank You 38 Information Technology Institute Operating Systems Threads Dr. Mohamed Handosa Mansoura University [email protected] Topics 4.1. Benefits 4.2. Multi-core Programming Challenges 4.3. Types of Parallelism 4.4. Multithreading Models 4.5. Implicit Threading 4.6. Thread Cancellation 2 Threads A thread is a single sequence of execution.
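To make the notion of a thread concrete, here is a minimal sketch (not from the slides) using Python's threading module; the task function, its arguments, and the results dictionary are illustrative assumptions. Two threads of the same process run concurrently and write into shared process data:

```python
import threading

results = {}  # data shared by all threads of the process

def task(name, n):
    # each thread is a separate sequence of execution
    results[name] = sum(range(n))

# a multi-threaded process: two tasks run concurrently
t1 = threading.Thread(target=task, args=("a", 10))
t2 = threading.Thread(target=task, args=("b", 5))
t1.start()
t2.start()
t1.join()  # wait for both threads to terminate
t2.join()
print(results["a"], results["b"])  # 45 10
```

Both threads share the process's data section (here, the results dictionary), which is exactly the sharing that distinguishes threads from separate processes.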
A thread is the basic unit of CPU utilization. A traditional process has a single thread of control. A single-threaded process can perform one task at a time. A multi-threaded process can perform multiple tasks at a time. A thread comprises a thread ID, a program counter (PC), a register set, and a stack. The threads of a process share its code section, data section, and resources. 3 Single-Threaded versus Multi-Threaded 4 Benefits 5 Benefits of Multithreaded Programs Responsiveness. A blocked thread doesn't block the process. Resource sharing. Threads share the resources of the process. Economy. Thread creation has less overhead than process creation. Scalability. Threads can run in parallel on a multiprocessor machine. 6 Multi-core Programming Challenges 7 Multi-core Programming Challenges Identifying tasks. Divide the program into tasks that can run in parallel. Balance. Ensure that these tasks perform equal work of equal value. Data splitting. Divide the data between tasks that run on separate cores. Data dependency. Examine the data to find dependencies between tasks. Testing and debugging. Testing a program with many execution paths is harder. 8 Types of Parallelism Data Parallelism Task Parallelism 9 Data Parallelism Distribution of data across multiple cores. 10 Task Parallelism Distribution of tasks across multiple cores. 11 Multithreading Models Many-to-one model Many-to-many model One-to-one model Two-level model 12 User Threads versus Kernel Threads Support for threads may be provided either at the user level, for user threads, or by the operating system, for kernel threads. User threads are managed without OS support. Kernel threads are supported and managed directly by the OS. Most modern OSs support kernel threads (e.g., Windows, Linux). 13 Many-to-One Model Only one user thread can access the kernel at a time. Threads cannot run in parallel on multi-core systems. The entire process will block if a thread makes a blocking system call. Thread management is done in user space by a thread library, not the OS.
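The data-parallelism idea described earlier in this section can be sketched with Python's concurrent.futures thread pool (a sketch under assumed inputs, not from the slides; the four-way split and the choice of sum as the operation are arbitrary). The same operation is applied to each chunk of the data, potentially on a different core:

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(100))
chunks = [data[i::4] for i in range(4)]  # split the data across 4 workers

# data parallelism: the same operation (sum) runs on every chunk
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum, chunks))

total = sum(partial_sums)  # combine the partial results
print(total)  # 4950, the same as sum(data)
```

Task parallelism would instead submit different operations (e.g., one worker summing while another sorts) to the same pool.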
14 One-to-One Model Multiple threads can access the kernel at a given time. Multiple threads can run in parallel on multi-core systems. When a thread makes a blocking system call, another thread can run. The OS creates a kernel thread per user thread (which can burden performance). 15 Many-to-Many Model Multiple threads can access the kernel at a given time. Multiple threads can run in parallel on multi-core systems. When a thread makes a blocking system call, another thread can run. The OS creates a set of kernel threads (less than or equal to the number of user threads). 16 Advantages of Many-to-Many Model Does not suffer from the drawback of the many-to-one model. Multiple user threads can run in parallel on a multicore system. Does not suffer from the drawback of the one-to-one model. Increasing the number of user threads does not affect system performance. 17 Two-Level Model Most operating systems use the one-to-one model. In practice, it is difficult to implement the many-to-many model. There is no need to limit the number of kernel threads on modern multicore systems. Modern concurrency libraries use the many-to-many model. Developers identify tasks, and the library maps them to kernel threads. 18 Implicit Threading 19 Explicit Threading Consider a server that creates a thread for each request. The thread is discarded once it has completed its work. Each new request requires creating a new thread (i.e., overhead). The number of threads that can be created has no upper bound. Too many threads working concurrently can exhaust system resources. 20 Implicit Threading Helps in developing modern applications with hundreds of threads. Programmers identify tasks (not threads) that can run in parallel. The programmer writes a task as a function. Run-time libraries (not programmers) create and manage threads. Each task is mapped to a separate thread using the many-to-many model. Two implicit threading approaches: Thread pool. Works well for asynchronous (independent) tasks. Fork-join.
Works well for tasks that are synchronous (cooperating). 21 Thread Pool At startup, the server creates a set of threads in a pool. When the server receives a request, it submits it to the pool. If the pool has a free thread, the request is served immediately. If the pool is empty, the task is queued until a thread becomes free. Once a thread completes serving the request, it returns to the pool. Thread pools offer two benefits: Serving a request with an existing thread is faster than creating a thread. Limiting the number of threads helps avoid exhausting system resources. A dynamic thread pool can adjust the number of threads in the pool. 22 Fork-Join In traditional fork-join, the main parent thread does the following: 1. Create (i.e., fork) one or more child threads. 2. Wait for the created threads (children) to terminate. 3. Join the threads, at which point it can retrieve and combine their results. Implicit fork-join is a synchronous version of the thread pool. Threads are not constructed directly; instead, parallel tasks are designated. A library manages the creation of threads and assigns tasks to those threads. 23 Thread Cancellation 24 Thread Cancellation Refers to terminating a thread before it has completed. Cancellation of a target thread can be asynchronous or deferred. Asynchronous cancellation One thread immediately terminates the target thread. Deferred cancellation The target thread periodically checks whether it should terminate. This allows the target thread to terminate itself in an orderly fashion. 25 Thank You 26 Information Technology Institute Operating System Process Synchronization Dr. Mohamed Handosa Mansoura University [email protected] Topics Producer-Consumer Problem Critical Section Problem Hardware Synchronization Mutex Locks Semaphores Deadlocks 2 Data Inconsistency Processes can execute concurrently. The CPU scheduler switches rapidly between processes. A cooperating process can share data with other processes.
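The deferred-cancellation pattern described earlier can be sketched in Python using a threading.Event as the "should I terminate?" flag (a sketch, not from the slides; the worker function, the log list, and the sleep intervals are illustrative assumptions):

```python
import threading
import time

stop_requested = threading.Event()  # flag checked by the target thread
log = []

def worker():
    # deferred cancellation: the target thread periodically checks
    # whether it should terminate
    while not stop_requested.is_set():
        log.append("unit of work")
        time.sleep(0.01)
    log.append("cleaned up")  # terminate itself in an orderly fashion

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)
stop_requested.set()  # another thread requests cancellation
t.join()              # the target thread exits on its own
print(log[-1])  # cleaned up
```

Contrast this with asynchronous cancellation, where the target thread would be killed immediately and would get no chance to run its cleanup step.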
Concurrent access to shared data may result in data inconsistency. 3 Producer-Consumer Problem 4 Producer-Consumer Problem Also known as the bounded-buffer problem. Two processes, the producer and the consumer, share a common, fixed-size buffer used as a queue. The producer's job is to generate data and put it into the buffer. At the same time, the consumer removes the data from the buffer. The problem is to make sure that: The producer won't try to add data to the buffer if it is full. The consumer won't try to remove data from an empty buffer. 5 Proposed Solution – Shared Data

class Item {... }
int in = 0;
int out = 0;
int count = 0;
const int bufferSize = 10;
Item[] buffer = new Item[bufferSize];

6 Proposed Solution – Producer

Item nextProduced;
while (true) {
    // produce an item in nextProduced
    while (count == bufferSize); // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % bufferSize;
    count++;
}

7 Proposed Solution – Consumer

Item nextConsumed;
while (true) {
    while (count == 0); // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % bufferSize;
    count--;
    // consume the item in nextConsumed
}

8 Instruction Execution (Updating count) The increment and decrement of count are each implemented as three machine instructions:

count++:                       count--:
register1 = count              register2 = count
register1 = register1 + 1      register2 = register2 - 1
count = register1              count = register2

9 Race Condition
T0: producer register1 = count (register1 = 5)
T1: producer register1 = register1 + 1 (register1 = 6)
T2: consumer register2 = count (register2 = 5)
T3: consumer register2 = register2 - 1 (register2 = 4)
T4: producer count = register1 (count = 6)
T5: consumer count = register2 (count = 4)

10 Critical Section Problem 11 Critical Section A critical section is a segment of code in which the process may be accessing and updating data that is shared with other processes. When one process is executing in its critical section, no other process should be allowed to execute in its critical section.
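The lost-update interleaving on the race-condition slide can be replayed step by step in plain Python (a sequential simulation of that schedule, not real concurrency; the variable names mirror the slide):

```python
# replay the interleaving of count++ (producer) and count-- (consumer)
count = 5
register1 = count           # T0: producer loads count        (register1 = 5)
register1 = register1 + 1   # T1: producer increments         (register1 = 6)
register2 = count           # T2: consumer loads count        (register2 = 5)
register2 = register2 - 1   # T3: consumer decrements         (register2 = 4)
count = register1           # T4: producer stores its result  (count = 6)
count = register2           # T5: consumer stores its result  (count = 4)
print(count)  # 4, although one increment and one decrement should leave 5
```

The producer's update is lost because the consumer loaded the old value of count before the producer stored the new one; the final result depends entirely on the order of the stores.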
An entry section is a segment of code in which the process requests permission to enter its critical section. An exit section is a segment of code that follows the critical section. 12 Cooperating Process Implementation Operating Systems Concepts 10th Edition 13 Solution Requirements Mutual exclusion. If a process Pi is executing in its critical section, then no other processes can be executing in their critical sections. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the next process cannot be postponed indefinitely. Bounded waiting. There exists a bound on the number of times that other processes are allowed to enter their critical sections before the request of a given process is granted. 14 Hardware Synchronization Test-and-Set Compare-and-Swap 15 Hardware Synchronization Many modern computer systems provide hardware instructions to: Test and modify the content of a word. Swap the contents of two different words. These instructions are atomic (they execute as one uninterruptible unit). These instructions can be used to solve the critical-section problem. 16 The Test-and-Set Instruction

bool TestAndSet(ref bool value) {
    bool oldValue = value;
    value = true;
    return oldValue;
}

17 Mutual Exclusion using Test-and-Set

do {
    while (TestAndSet(ref lock)); // do nothing
    // critical section
    lock = false;
    // remainder section
} while (true);

18 The Compare-and-Swap Instruction

int CompareAndSwap(ref int value, int expected, int newValue) {
    int temp = value;
    if (value == expected)
        value = newValue;
    return temp;
}

19 Mutual Exclusion using Compare-and-Swap

do {
    while (CompareAndSwap(ref lock, 0, 1) != 0); // do nothing
    // critical section
    lock = 0;
    // remainder section
} while (true);

20 Mutex Locks 21 Software-Based Solutions Hardware-based solutions are complicated and generally inaccessible to application programmers. Instead, OS designers build higher-level software tools.
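As a preview of such higher-level tools, the following sketch uses Python's threading.Lock as the mutual-exclusion primitive to protect a shared counter (a sketch, not the slides' implementation; the counter, the thread count, and the iteration count are illustrative assumptions):

```python
import threading

count = 0
lock = threading.Lock()  # a higher-level mutual-exclusion tool

def increment(n):
    global count
    for _ in range(n):
        with lock:        # entry section: acquire the lock
            count += 1    # critical section: update shared data
        # exit section: the lock is released automatically

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)  # 40000: no updates are lost while the lock is held
```

Without the lock, the four threads could interleave their load-modify-store sequences exactly as in the race-condition schedule and lose updates; with it, at most one thread executes the critical section at a time.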
The simplest of these tools is the mutex lock (short for mutual exclusion). 22 Software-Based Solutions A mutex lock is a Boolean variable. It is accessed only through two standard atomic operations: Acquire(): a process must call it before entering the critical section. Release(): a process must call it right after exiting the critical section. 23 Mutual Exclusion using Mutex Locks 24 Mutex Lock Implementation

class Mutex {
    private bool available;

    public Mutex() { available = true; }

    public void Acquire() // must execute atomically
    {
        while (!available); // busy wait
        available = false;
    }

    public void Release() // must execute atomically
    {
        available = true;
    }
}

25 Semaphores 26 Semaphore A semaphore is an integer variable. It is accessed only through two standard atomic operations: Wait(): a process must call it before entering the critical section. Signal(): a process must call it right after exiting the critical section. 27 Semaphore Implementation

class Semaphore {
    private int value;

    public Semaphore(int initial) { value = initial; }

    public void Wait() {
        while (value