CHAPTER 3: PROCESSES
By: Yakie Galaura Ampig and Leizhelle Yvonne Brasileño, BSCpE 3B

CHAPTER OBJECTIVES
To introduce the notion of a process (a program in execution), which forms the basis of all computation.
To describe the various features of processes, including scheduling, creation, and termination.
To explore interprocess communication using shared memory and message passing.
To describe communication in client-server systems.

3.1 PROCESS CONCEPT
What do we call all the CPU activities? A batch system executes jobs, whereas a time-shared system has user programs, or tasks. Even on a single-user system, a user may be able to run several programs simultaneously: a word processor, a Web browser, and an e-mail package.

THE PROCESS
A process is a program in execution. A process is more than the program code (the text section). It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers. A process generally also includes the process stack, which contains temporary data, and a data section, which contains global variables. It may also include a heap, which is memory that is dynamically allocated during process run time.

Figure 3.1: Process in memory

Two common techniques for loading executable files are double-clicking an icon representing the executable file and entering the name of the executable file on the command line.

The Java programming environment provides a good example. In most circumstances, an executable Java program is executed within the Java Virtual Machine (JVM). The JVM executes as a process that interprets the loaded Java code and takes actions on behalf of that code. For example, to run the compiled Java program Program.class, we would enter java Program.

PROCESS STATE
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states: new, running, waiting, ready, or terminated.

Figure 3.2: Diagram of process state

PROCESS CONTROL BLOCK
Each process is represented in the operating system by a process control block (PCB), also called a task control block.

Figure 3.3: Process control block (PCB)

Process state: the state may be new, ready, running, waiting, halted, and so on.
Program counter: the counter indicates the address of the next instruction to be executed for this process.
CPU registers: these include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information.
CPU-scheduling information: this includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
Memory-management information: this may include such items as the value of the base and limit registers and the page tables, or the segment tables, depending on the memory system used by the operating system.
Accounting information: this includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
I/O status information: this includes the list of I/O devices allocated to the process, a list of open files, and so on.

Figure 3.4: Diagram showing CPU switch from process to process
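To make the PCB fields listed above concrete, here is a simplified, purely illustrative C struct. It is a sketch only: no real operating system declares its PCB exactly like this, and the field names and sizes are arbitrary assumptions.

```c
/* A simplified, hypothetical PCB layout, just to make the fields above
 * concrete; no real kernel's process structure looks exactly like this. */
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process identifier */
    enum proc_state state;            /* process state */
    uint64_t        program_counter;  /* address of the next instruction */
    uint64_t        registers[16];    /* saved CPU registers */
    int             priority;         /* CPU-scheduling information */
    void           *page_table;       /* memory-management information */
    unsigned long   cpu_time_used;    /* accounting information */
    int             open_files[16];   /* I/O status: open-file descriptors */
};
```

When the CPU switches from one process to another (Figure 3.4), it is exactly this kind of per-process record that is saved for the old process and restored for the new one.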
THREADS
The process model discussed so far has implied that a process is a program that performs a single thread of execution. For example, when a process is running a word-processor program, a single thread of instructions is being executed. This single thread of control allows the process to perform only one task at a time: the user cannot simultaneously type in characters and run the spell checker within the same process. Most modern operating systems have extended the process concept to allow a process to have multiple threads of execution and thus to perform more than one task at a time.

3.2 PROCESS SCHEDULING
The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process for program execution on the CPU.

SCHEDULING QUEUES
As processes enter the system, they are put into a job queue, which consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. A process may also have to wait for a busy device such as a disk; the list of processes waiting for a particular I/O device is called a device queue.

Figure 3.5: The ready queue and various I/O device queues

A common representation of process scheduling is a queueing diagram. Two types of queues are present: the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.

Figure 3.6: Queueing-diagram representation of process scheduling

Once the process is allocated the CPU and is executing, one of several events could occur:
The process could issue an I/O request and then be placed in an I/O queue.
The process could create a new child process and wait for the child's termination.
The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.

SCHEDULERS
A process migrates among the various scheduling queues throughout its lifetime. The operating system must select, for scheduling purposes, processes from these queues in some fashion. The selection process is carried out by the appropriate scheduler. The long-term scheduler, or job scheduler, selects processes from the pool of submitted jobs and loads them into memory for execution. The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them. The primary distinction between these two schedulers lies in the frequency of execution: the short-term scheduler must select a new process for the CPU frequently, whereas the long-term scheduler executes much less frequently; minutes may separate the creation of one new process and the next. The long-term scheduler controls the degree of multiprogramming.
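As a toy illustration of the ready queue and of the short-term scheduler's basic job (take the next ready process and hand it the CPU), here is a hypothetical sketch. The simplified pcb structure and the "dispatch" step are stand-ins invented for this example, not any real kernel's data structures.

```c
/* Toy ready queue: a singly linked list of simplified PCBs. The short-term
 * scheduler repeatedly removes the PCB at the head and dispatches it. */
#include <stdio.h>
#include <stdlib.h>

struct pcb {
    int pid;
    struct pcb *next;                  /* next PCB in the ready queue */
};

static struct pcb *head = NULL, *tail = NULL;

static void ready_enqueue(struct pcb *p)   /* a process becomes ready */
{
    p->next = NULL;
    if (tail) tail->next = p; else head = p;
    tail = p;
}

static struct pcb *ready_dequeue(void)     /* scheduler picks the next process */
{
    struct pcb *p = head;
    if (p) {
        head = p->next;
        if (!head) tail = NULL;
    }
    return p;
}

int main(void)
{
    for (int pid = 1; pid <= 3; pid++) {   /* pretend three processes become ready */
        struct pcb *p = malloc(sizeof *p);
        p->pid = pid;
        ready_enqueue(p);
    }
    struct pcb *next = ready_dequeue();    /* the short-term scheduler's selection */
    printf("dispatching process %d\n", next->pid);
    free(next);
    return 0;
}
```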
On some systems, the long-term scheduler may be absent or minimal.

The key idea behind a medium-term scheduler is that sometimes it can be advantageous to remove a process from memory (and from active contention for the CPU) and thus reduce the degree of multiprogramming.

Figure 3.7: Addition of medium-term scheduling to the queueing diagram

Swapping: the process is swapped out, and is later swapped in, by the medium-term scheduler. Swapping may be necessary to improve the process mix or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up.

CONTEXT SWITCH
Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch. The context is represented in the PCB of the process; it includes the value of the CPU registers, the process state, and memory-management information. Generically, we perform a state save of the current state of the CPU, be it in kernel or user mode, and then a state restore to resume operations. Context-switch times are highly dependent on hardware support; for instance, some processors (such as the Sun UltraSPARC) provide multiple sets of registers. A typical speed is a few milliseconds.

3.3 OPERATIONS ON PROCESSES
The processes in most systems can execute concurrently, and they may be created and deleted dynamically. Thus, these systems must provide a mechanism for process creation and termination.

PROCESS CREATION
During the course of execution, a process may create several new processes. The creating process is called a parent process, and the new processes are called the children of that process. Each of these new processes may in turn create other processes, forming a tree of processes.

Most operating systems (including UNIX, Linux, and Windows) identify processes according to a unique process identifier (or pid), which is typically an integer number. The pid provides a unique value for each process in the system, and it can be used as an index to access various attributes of a process within the kernel.

Figure 3.8: A tree of processes on a typical Linux system, showing the name of each process and its pid

After a computer starts up, the `init` process begins running and can create other processes, like web servers or SSH servers. For instance, `kthreadd` helps create kernel-related processes like `khelper`, while `sshd` manages SSH connections. You can see all these processes by using the `ps` command in UNIX and Linux; for example, running `ps -el` gives a detailed list of the processes running on the system.

Processes need resources like CPU time and memory to do their work. They can get these resources directly from the operating system or from their parent process. If resources are limited, the parent might have to divide them among its children or share them, which prevents the system from being overwhelmed by too many processes.
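Here is a minimal sketch of UNIX process creation, in the spirit of the fork() figures referenced below. The choice of /bin/ls as the child's program is only an illustrative assumption.

```c
/* The parent creates a child with fork(); the child replaces its image with
 * /bin/ls via exec, and the parent waits for the child to terminate. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                /* create a child process */

    if (pid < 0) {                     /* fork failed */
        perror("fork");
        return 1;
    }
    if (pid == 0) {                    /* child process */
        execlp("/bin/ls", "ls", (char *)NULL);
        perror("execlp");              /* reached only if exec fails */
        exit(1);
    }
    wait(NULL);                        /* parent waits for the child */
    printf("Child %d complete\n", (int)pid);
    return 0;
}
```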
Figure 3.9: Creating a separate process using the UNIX fork() system call
Figure 3.10: Process creation using the fork() system call
Figure 3.11: Creating a separate process using the Windows API

PROCESS TERMINATION
A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit() system call. At that point, the process may return a status value (typically an integer) to its parent process (via the wait() system call). All the resources of the process, including physical and virtual memory, open files, and I/O buffers, are deallocated by the operating system.

Termination can occur in other circumstances as well. A process can cause the termination of another process via an appropriate system call (for example, TerminateProcess() in Windows). A parent may terminate the execution of one of its children for a variety of reasons, such as these:
1. The child has exceeded its usage of some of the resources that it has been allocated.
2. The task assigned to the child is no longer required.
3. The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.

Cascading termination: if a parent terminates, all of its children must also be terminated; this phenomenon is normally initiated by the operating system.

To illustrate process execution and termination, consider that, in Linux and UNIX systems, we can terminate a process by using the exit() system call, providing an exit status as a parameter. The wait() system call also returns the process identifier of the terminated child, so that the parent can tell which of its children has terminated. (A short sketch of this exit()/wait() interplay appears just before the shared-memory discussion below.)

Zombie process: a process that has terminated, but whose parent has not yet called wait(). All processes transition to this state when they terminate, but generally they exist as zombies only briefly.
Orphans: child processes whose parent terminated without invoking wait().

3.4 INTERPROCESS COMMUNICATION
A process is independent if it cannot affect or be affected by other processes and does not share data with any other process. A process is cooperating if it can affect or be affected by other processes and shares data with other processes.

Reasons for providing an environment that allows process cooperation: information sharing, computation speedup, modularity, and convenience.

Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information. There are two fundamental models of IPC: shared memory and message passing.

Shared memory: a region of memory that is shared by cooperating processes is established, and processes can then exchange information by reading and writing data to the shared region. It can be faster than message passing, since message-passing systems are typically implemented using system calls.

Message passing: communication takes place by means of messages exchanged between the cooperating processes. It is useful for exchanging smaller amounts of data, because no conflicts need be avoided, and it is easier to implement in a distributed system than shared memory.

Figure 3.12: Communications models (message passing and shared memory)
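Before turning to shared memory, here is the sketch promised above of the exit()/wait() interplay: the child terminates with a status value and the parent collects it. The status value 7 is an arbitrary example, not anything prescribed by the chapter.

```c
/* The child terminates with an explicit status; the parent collects both the
 * child's pid and that status via wait(), which also keeps the child from
 * lingering as a zombie. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
        exit(7);                      /* child: terminate with status 7 */

    int status;
    pid_t child = wait(&status);      /* parent: returns the terminated child's pid */
    if (WIFEXITED(status))
        printf("child %d exited with status %d\n",
               (int)child, WEXITSTATUS(status));
    return 0;
}
```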
SHARED-MEMORY SYSTEMS
Shared-memory systems are a method of IPC that allows multiple processes to communicate by accessing a common region of memory. This approach facilitates high-speed data exchange between processes by eliminating the need for data to be copied between different process memory spaces.

Operating system role: the operating system's role in shared-memory systems is minimal once the shared region is established. It does not control the data format or the specific locations within the memory where data is stored; the responsibility for managing and synchronizing access to the shared memory lies with the processes themselves.

Producer-consumer problem: a producer process produces information that is consumed by a consumer process. One solution to the producer-consumer problem uses shared memory. To allow producer and consumer processes to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer. There are two types of buffers: an unbounded buffer places no practical limit on the size of the buffer, while a bounded buffer assumes a fixed buffer size.

Figure 3.13: The producer process using shared memory
Figure 3.14: The consumer process using shared memory

MESSAGE-PASSING SYSTEMS
Message passing provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. It is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. Two operations are provided: send(message) and receive(message). Communication links may involve direct or indirect communication, synchronous or asynchronous communication, and automatic or explicit buffering.

Naming
Naming determines how processes identify each other to establish communication. There are two methods: direct communication and indirect communication.

Direct communication: each process that wants to communicate must explicitly name the recipient or sender of the communication. Addressing can be symmetric or asymmetric. In the symmetric scheme, send(P, message) sends a message to process P, and receive(Q, message) receives a message from process Q. In the asymmetric scheme, send(P, message) sends a message to process P, while receive(id, message) receives a message from any process. The disadvantage in both of these schemes is the limited modularity of the resulting process definitions.

Indirect communication: the messages are sent to and received from mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. A process can communicate with another process via a number of different mailboxes, but two processes can communicate only if they have a shared mailbox. The send() and receive() primitives are defined as follows: send(A, message) sends a message to mailbox A, and receive(A, message) receives a message from mailbox A. A mailbox may be owned either by a process or by the operating system. If a mailbox is owned by a process, we distinguish between the owner and the user; if it is owned by the operating system, it has an existence of its own.
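POSIX message queues are one concrete mailbox mechanism, and the sketch below shows the abstract send(A, message) / receive(A, message) pattern with them. This is only an illustration under assumed values: the mailbox name /mbox-A, the queue attributes, and the message text are arbitrary choices, not anything specified by the chapter.

```c
/* Mailbox-style (indirect) communication sketched with a POSIX message
 * queue: the queue "/mbox-A" plays the role of mailbox A. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };

    /* create (or open) mailbox A */
    mqd_t mbox = mq_open("/mbox-A", O_CREAT | O_RDWR, 0644, &attr);
    if (mbox == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send(A, message) */
    const char *msg = "hello via mailbox A";
    mq_send(mbox, msg, strlen(msg) + 1, 0);

    /* receive(A, message): shown in the same process for brevity; normally
     * a different process would open "/mbox-A" and receive from it */
    char buf[128];
    mq_receive(mbox, buf, sizeof buf, NULL);
    printf("received: %s\n", buf);

    mq_close(mbox);
    mq_unlink("/mbox-A");   /* remove the mailbox when it is no longer needed */
    return 0;
}
```

On some Linux systems this needs to be linked with -lrt.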
Synchronization
Communication between processes takes place through calls to the send() and receive() primitives. Message passing may be either blocking (synchronous) or nonblocking (asynchronous). With a blocking send, the sending process is blocked until the message is received by the receiving process or by the mailbox. With a nonblocking send, the sending process sends the message and resumes operation. With a blocking receive, the receiver blocks until a message is available. With a nonblocking receive, the receiver retrieves either a valid message or a null.

Buffering
Queues of messages can be implemented in three ways. Zero capacity: the queue has a maximum length of zero; thus, the link cannot have any messages waiting in it. Bounded capacity: the queue has finite length n; thus, at most n messages can reside in it. Unbounded capacity: the queue's length is potentially infinite; thus, any number of messages can wait in it.

3.5 EXAMPLES OF IPC SYSTEMS
We will explore three different IPC systems: POSIX (Portable Operating System Interface), Mach, and Windows.

POSIX SHARED MEMORY
Several IPC mechanisms are available for POSIX systems, including shared memory and message passing. Among these, shared memory is one of the most efficient methods. POSIX shared memory is organized using memory-mapped files, which associate the region of shared memory with a file.

Steps to create and use POSIX shared memory (a consolidated sketch of these three calls follows this list):
1. Creating a shared-memory object: a shared-memory object is created using the shm_open() system call.
2. Setting the size of the shared-memory object: once the shared-memory object is created, its size is set using the ftruncate() function.
3. Mapping the shared memory: the mmap() function maps the shared-memory object into the address space of the process.
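Here is a minimal sketch of the three steps above for a writer process. The object name /shm-example, the 4096-byte size, and the message text are illustrative assumptions, not values taken from the chapter.

```c
/* Writer: create a POSIX shared-memory object, size it, map it, and write a
 * string into it. A reader would shm_open() the same name, mmap() it, and
 * read the string; shm_unlink() removes the object when it is no longer needed. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/shm-example";   /* name of the shared-memory object */
    const int SIZE = 4096;               /* size of the shared region in bytes */

    /* 1. create the shared-memory object */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    if (fd == -1) { perror("shm_open"); return 1; }

    /* 2. configure the size of the object */
    if (ftruncate(fd, SIZE) == -1) { perror("ftruncate"); return 1; }

    /* 3. map the object into this process's address space */
    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) { perror("mmap"); return 1; }

    /* write a message into the shared region */
    snprintf(ptr, SIZE, "Hello from process %d", (int)getpid());
    return 0;
}
```

On some systems this must be linked with -lrt.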
MACH
Mach is a pioneering microkernel operating system that serves as the foundation for many modern systems, including macOS. The Mach kernel supports the creation and destruction of multiple tasks, which are similar to processes but have multiple threads of control and fewer associated resources.

Messages: most communication in Mach, including all intertask information, is carried out by messages.
Ports: the mailboxes to which messages are sent and from which they are received.
Mailbox set: a collection of mailboxes, as declared by the task, which can be grouped together and treated as one mailbox for the purposes of the task.

Message passing in Mach:
1. Message ports: ports are the fundamental communication endpoints in Mach. When a task is created, it automatically receives two special ports: the kernel port, which is used by the kernel to communicate with the task, and the notify port, which is used to send notifications about events.
2. Message structures: a message in Mach consists of a fixed-length header and a variable-length data section.
3. Message operations: the msg_send() system call sends a message to a specified port, and the msg_receive() system call retrieves a message from a port.
4. Port allocation and management: the port_allocate() system call is used to create new ports.

If the mailbox is full, the sending thread has four options:
1. Wait indefinitely until there is room in the mailbox.
2. Wait at most n milliseconds.
3. Do not wait at all but rather return immediately.
4. Temporarily cache a message. Here, a message is given to the operating system to keep, even though the mailbox to which it is being sent is full.

Essentially, Mach maps the address space containing the sender's message into the receiver's address space.

WINDOWS
The Windows operating system is an example of modern design that employs modularity to increase functionality and decrease the time needed to implement new features. Windows provides support for multiple operating environments, or subsystems.

Advanced Local Procedure Call (ALPC) is the message-passing facility in Windows. It is used for communication between two processes on the same machine. Windows uses two types of ports: connection ports and communication ports. Connection ports are published by server processes and are accessible to all processes on the system. Once a connection is established, a pair of private communication ports is created to manage the bidirectional flow of messages between the client and the server.

When an ALPC channel is created, one of three message-passing techniques is chosen:
1. For small messages (up to 256 bytes), the port's message queue is used as intermediate storage, and the messages are copied from one process to the other.
2. Larger messages must be passed through a section object, which is a region of shared memory associated with the channel.
3. When the amount of data is too large to fit into a section object, an API is available that allows server processes to read and write directly into the address space of a client.

Figure 3.19: Advanced local procedure calls in Windows

3.6 COMMUNICATION IN CLIENT-SERVER SYSTEMS
Three other strategies for communication in client-server systems are sockets, remote procedure calls (RPCs), and pipes.

SOCKETS
A socket is defined as an endpoint for communication. A socket is identified by an IP address concatenated with a port number. In general, sockets use a client-server architecture. The server waits for client requests by listening on a designated port, typically associated with a specific service; the client initiates a connection by contacting the server's IP address and port.

Figure 3.20: Communication using sockets

Java provides three different types of sockets: connection-oriented (TCP) sockets, which are implemented with the Socket class; connectionless (UDP) sockets, which use the DatagramSocket class; and the MulticastSocket class, which is a subclass of the DatagramSocket class.

Figure 3.21: Date server
Figure 3.22: Date client

Loopback refers to the special IP address 127.0.0.1, which a host uses to refer to itself.
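The chapter's date-server example (Figure 3.21) is written with Java's Socket class; to keep the code sketches in this transcript in one language, here is a comparable sketch using the C/BSD socket API instead. The port number 6013 and the omission of error handling are assumptions made purely for brevity.

```c
/* A minimal connection-oriented (TCP) date server: listen on a port and, for
 * each client that connects, send the current date and time, then close the
 * connection. Error checking is omitted to keep the sketch short. */
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int server = socket(AF_INET, SOCK_STREAM, 0);   /* create a TCP socket */

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);       /* accept on any local interface */
    addr.sin_port = htons(6013);                    /* arbitrary listening port */

    bind(server, (struct sockaddr *)&addr, sizeof addr);
    listen(server, 5);                              /* allow up to 5 pending connections */

    for (;;) {
        int client = accept(server, NULL, NULL);    /* block until a client connects */
        time_t now = time(NULL);
        char *date = ctime(&now);                   /* current date and time as text */
        write(client, date, strlen(date));          /* send it to the client */
        close(client);
    }
}
```

A client simply connects to the server's IP address and port (for example, the 127.0.0.1 loopback address when testing on one machine) and reads one line.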
REMOTE PROCEDURE CALLS
One of the most common forms of remote service is the RPC paradigm. The RPC was designed as a way to abstract the procedure-call mechanism for use between systems with network connections.

How RPCs work:
1. Client-side stub: when a client makes an RPC call, the client-side stub marshals the parameters into a network-transmittable format.
2. Communication: the marshalled data is sent to the server through a message-passing mechanism.
3. Server-side stub: after executing the requested procedure, the server-side stub marshals the return values into a message and sends it back to the client.

RPC semantics: "at most once" versus "exactly once".
At most once: the server processes each message only once, even if it receives multiple copies.
Exactly once: in addition to the "at most once" guarantee, the server acknowledges the successful receipt and execution of an RPC call.

Figure 3.23: Execution of a remote procedure call (RPC)

Binding client and server ports:
Fixed port binding: the port number is hardcoded into the client and server programs at compile time.
Dynamic binding (rendezvous mechanism): a more flexible approach involves a rendezvous daemon that dynamically provides the port number.

PIPES
A pipe acts as a conduit allowing two processes to communicate. Pipes were one of the first IPC mechanisms in early UNIX systems.

Ordinary Pipes
An ordinary pipe enables unidirectional data flow between a producer process, which writes data to the pipe, and a consumer process, which reads the data from it. On UNIX systems, ordinary pipes are constructed using the pipe(int fd[]) function.

Figure 3.24: File descriptors for an ordinary pipe
Figure 3.25: Ordinary pipe in UNIX

Ordinary pipes on Windows systems are termed anonymous pipes, and they behave similarly to their UNIX counterparts: they are unidirectional and employ parent-child relationships between the communicating processes.

Figure 3.27: Windows anonymous pipe (parent process)
Figure 3.28: Windows anonymous pipe (child process)

Named Pipes
Named pipes extend the concept of ordinary pipes, providing a more versatile communication mechanism between processes. Named pipes are referred to as FIFOs (first in, first out) in UNIX systems. A FIFO is created using the mkfifo() system call, and standard file operations like open(), read(), write(), and close() are used to interact with it.

Characteristics of UNIX FIFOs:
Persistence: a FIFO persists in the file system until it is explicitly deleted.
Half-duplex communication: FIFOs allow only one-way communication at a time (half duplex).
Local communication: FIFOs can only be used for communication between processes on the same machine.

Named pipes on Windows systems offer a more robust communication mechanism than UNIX FIFOs.

Characteristics of Windows named pipes:
Full-duplex communication: allows simultaneous two-way communication.
Remote communication: processes on different machines can communicate using named pipes.
Data orientation: Windows named pipes can transmit either byte-oriented or message-oriented data, offering greater flexibility.

THANK YOU! God Bless us all!