
IT2105 Threads

Thread Functionality
A thread is a fundamental element of central processing unit (CPU) utilization. It is generally composed of a thread identifier, a program counter, a set of registers, and a stack. Threads that belong to the same process share the same code section, data section, and other operating system (OS) resources. A traditional process has a single thread of control; if a process has multiple threads of control, it can perform more than one task at a time (Silberschatz, Galvin, & Gagne, 2018). A thread should contain at least the following:
o Thread execution state
o A saved thread context when not running
o Some per-thread static storage for local variables
o Access to the memory and resources of the process
o An execution stack

To differentiate the resource-ownership and program-execution (scheduling) characteristics of a process, particularly in recently developed systems, the unit of resource ownership is usually referred to as the process or task, while the unit of dispatching is usually referred to as the thread or lightweight process.

The following process-and-thread arrangements may occur (Stallings, 2018):
One process : one thread (1:1) – Each thread of execution is a unique process with its own address space and resources. MS-DOS and classic UNIX are examples of OSs that support this kind of relationship.
Multiple processes : one thread per process (M:1) – A thread may migrate from one (1) process environment to another, which allows a thread to be easily moved among distinct systems. Some variants of the UNIX OS support this arrangement.
One process : multiple threads (1:M) – A process defines a specific address space and dynamic resource ownership; thus, multiple threads can be created and executed within the process. The Java runtime environment is an example of a system with this arrangement.
Multiple processes : multiple threads per process (M:M) – This combines the attributes of the M:1 and 1:M arrangements.

As an example, the list of attributes below characterizes a Windows thread object (Stallings, 2018):
Thread ID – A unique value that identifies a thread when it calls a server
Thread context – The set of register values and other volatile data that defines the execution state of a thread
Dynamic priority – The thread's execution priority at any given moment
Base priority – The lower limit of the thread's dynamic priority
Thread processor affinity – The set of processors on which the thread can run
Thread execution time – The cumulative amount of time a thread has executed in user mode and in kernel mode
Alert status – A flag that indicates whether a waiting thread may execute an asynchronous procedure call
Suspension count – The number of times the thread's execution has been suspended without being resumed
Impersonation token – A temporary access token that allows a thread to perform operations on behalf of another process
Termination port – An inter-process communication channel to which the process manager sends a message when the thread terminates
Thread exit status – The reason for a thread's termination
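The attributes above are specific to Windows, but most threading APIs expose rough analogues of a thread's identifier and execution state. As a minimal, hedged sketch (using Python's `threading` module rather than the Windows API), the following shows a thread gaining an ID when it is started and losing its "alive" state when it finishes:

```python
import threading
import time

def worker(delay):
    # Sleeping stands in for real work performed by the thread.
    time.sleep(delay)

# Before start(), the thread has no ID and is not alive.
t = threading.Thread(target=worker, args=(0.1,), name="demo-thread")
assert t.ident is None and not t.is_alive()

t.start()                       # the thread acquires an ID and becomes runnable
print("id:", t.ident, "alive:", t.is_alive())

t.join()                        # wait for the Finish transition
assert not t.is_alive()         # terminated: no longer alive
```

Here `name`, `ident`, and `is_alive()` loosely parallel the Thread ID and thread-execution-state attributes; there is no portable analogue for attributes such as the impersonation token or termination port.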
The object-oriented structure of the Windows OS facilitates the development of a general-purpose process facility. Each process is represented by an object that includes specific attributes and encapsulates a number of actions or services it may perform. A Windows process must also contain at least one (1) thread to execute; this thread may then create other threads. Some attributes of a thread object resemble those of a process object, and in those cases the thread attribute value is actually derived from the process attribute value. Note that one of the attributes of a thread object is its context, which contains the values of the processor registers when the thread last ran. This information enables threads to be suspended and resumed. Furthermore, in the Windows OS it is possible to alter the behavior of a thread by altering its context while it is suspended.

The thread construct is also useful in systems with a single processor, since it simplifies the structure of a program that logically performs several different functions. Threads are applied in single-user multiprocessing systems to enhance process execution, to implement a modular program structure, to support asynchronous processing, and to improve other foreground and background work.

Thread State
The key states for a thread are Running, Ready, and Blocked. Note that if a process is swapped out, all of its threads are automatically swapped out as well, because they all share the address space of the process. The following are the four (4) basic thread operations associated with a change in thread state (Stallings, 2018):
1. Spawn – This operation typically transpires whenever a new process is created, since a thread for that specific process is also formed.
2. Block – This happens when a thread needs to wait for a particular event. In this operation, all the information necessary to resume the thread's execution is automatically saved.
3. Unblock – This operation moves a blocked thread into the ready queue so that its execution can continue.
4. Finish – This happens whenever a thread completes its entire execution. Its register settings and stacks are then deallocated.

As an example, the thread states used by Windows are illustrated below:

Figure 1. The Windows thread states. Source: Operating Systems: Internals and Design Principles (9th ed.), 2018, p. 200

Thread Synchronization
All threads of a process share the same address space and resources. Hence, any alteration of a resource by one thread affects the environments of the other threads within the same process. It is therefore necessary to synchronize the activities of the various threads in a process to eliminate interference and maintain the integrity of the process's data structures.

Thread Library
Any application can be programmed to be multithreaded with the use of a thread library. A thread library provides programmers with an application programming interface (API) for creating and managing threads. It may contain code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts.

Types of Threads (Stallings, 2018)
User-Level Threads
In implementing user-level threads, all thread management is done by the application; the system kernel is not aware of the existence of the threads and continues to schedule the process as a unit. Below are some advantages of implementing user-level threads:
o Thread switching does not require kernel-mode privileges, since the entire thread management data structure is within the user address space of a single process. This saves the overhead of two (2) mode switches.
o Scheduling can be application-specific. The scheduling algorithm can be tailored to the application without disturbing the operating system scheduler.
o User-level threads can run on any operating system. No changes to the underlying kernel are required to support them.

Figure 2. A pure user-level thread. Source: Operating Systems: Internals and Design Principles (9th ed.), 2018, p. 184
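The idea that the application itself can schedule its "threads" without the kernel ever seeing them can be illustrated with a deliberately simplified sketch. This is an analogy, not how a real thread library is implemented: each task is a Python generator that voluntarily yields, and a tiny round-robin loop plays the role of the user-level scheduler:

```python
# A toy user-level "thread" scheduler: each task is a generator that
# yields whenever it is willing to give up the processor. The kernel
# never sees these tasks; the application decides who runs next.
from collections import deque

def task(name, steps, log):
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield                      # voluntary yield back to the scheduler

def run_round_robin(tasks):
    ready = deque(tasks)           # the application's own ready queue
    while ready:
        t = ready.popleft()
        try:
            next(t)                # dispatch: run until the next yield
            ready.append(t)        # still runnable: back of the queue
        except StopIteration:
            pass                   # task finished
log = []
run_round_robin([task("A", 2, log), task("B", 2, log)])
print(log)   # ['A:0', 'B:0', 'A:1', 'B:1'] — interleaving chosen in user space
```

Because the "scheduling algorithm" is ordinary application code, it can be replaced with any policy, which is exactly the application-specific-scheduling advantage listed for user-level threads.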
Kernel-Level Threads
In implementing kernel-level threads, all thread management is performed by the operating system's kernel. There is no thread management code at the application level, only an application programming interface (API) to the kernel's thread facility. The kernel maintains context information for the process as a whole and for the individual threads within the process, and scheduling by the kernel is done on a thread basis. Below are some of the advantages of implementing kernel-level threads:
o The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
o If one thread in a process is blocked, the kernel can schedule another thread from the same process.
o The kernel routines themselves can be multithreaded.

Figure 3. A pure kernel-level thread. Source: Operating Systems: Internals and Design Principles (9th ed.), 2018, p. 184

Combined User-Level and Kernel-Level Approach
Some operating systems provide a combined user-level and kernel-level thread facility. In this approach, thread creation is performed completely in user space, as are the scheduling and synchronization of threads within an application. Multiple user-level threads from a single application are then mapped onto a number of kernel-level threads. Consequently, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. If properly designed, the combined approach captures the advantages of both pure user-level and pure kernel-level threads. Solaris is a good example of an operating system that implements this combined approach.

Figure 4. A combined user-level and kernel-level thread arrangement. Source: Operating Systems: Internals and Design Principles (9th ed.), 2018, p. 184

Multithreading
Multithreading pertains to the ability of an operating system (OS) to support multiple, concurrent paths of execution within a single process. The use of multicore systems to run applications with multiple threads affects both application design and performance: the potential performance benefit of a multicore system depends on the ability of a multithreaded application to effectively exploit the parallel resources available to it. Moreover, an object-oriented multithreaded process is an efficient means of implementing a server application (Stallings, 2018).

Below are some general characteristics of multithreading (Gregg, 2021):
o The memory overhead of a thread is small; it only requires an extra stack, register state, and space for thread-local data.
o The CPU overhead of creating a thread is small, since creation involves only inexpensive application programming interface (API) calls.
o Communication within the process, and with other processes, is fast.
o Crash resilience is low: any bug can crash the entire application.
o Memory is managed via a user-space allocator, which may incur some CPU contention among multiple threads, as well as fragmentation before memory is reused.

The following are some examples of multithreaded applications:
o An application that creates thumbnails of photos from a collection of images
o A web browser that displays images or text while retrieving data from the network
o A word processor that displays text and responds to keystrokes or mouse clicks while performing spelling and grammar checks
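The thumbnail example can be sketched with a pool of worker threads. This is a hedged illustration: `make_thumbnail` is a hypothetical stand-in for real image processing, and the pool size is arbitrary:

```python
from concurrent.futures import ThreadPoolExecutor

def make_thumbnail(filename):
    # Hypothetical placeholder for I/O-bound work such as
    # reading an image file and scaling it down.
    return filename.replace(".jpg", "_thumb.jpg")

images = ["a.jpg", "b.jpg", "c.jpg"]

# A pool of worker threads processes the collection concurrently,
# while the main thread remains free to respond to the user.
with ThreadPoolExecutor(max_workers=2) as pool:
    thumbs = list(pool.map(make_thumbnail, images))

print(thumbs)   # ['a_thumb.jpg', 'b_thumb.jpg', 'c_thumb.jpg']
```

Note that `pool.map` preserves the input order of results even though the workers run concurrently, which keeps the surrounding program logic simple.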
The benefits of multithreaded programming can be categorized as follows (Silberschatz, Galvin, & Gagne, 2018):
Responsiveness – Incorporating multithreading into an interactive application allows a program to continue running even if part of it is blocked or performing a lengthy operation, thus increasing the application's responsiveness to the user.
Resource Sharing – Threads share the memory and resources of the process to which they belong, which allows an application to have several different threads of activity within the same address space.
Economical – Allocating memory and resources for process creation is costly; thread creation consumes less time and memory, making it more economical.
Scalability – The benefit of multithreading is greater on multiprocessor architectures, where threads can run in parallel on different processing cores.

Windows is a good example of an OS that supports multithreading. Threads in different processes may execute concurrently or appear to run at the same time, and multiple threads within the same process may be allocated to different processors and execute simultaneously. In the Windows OS, threads within the same process can exchange information through their common address space and can access the shared resources of the process. Threads in different processes can exchange information through shared memory that has been set up between the two (2) processes.

References:
Gregg, B. (2021). Systems performance: Enterprise and the cloud (2nd ed.). Pearson Education, Inc.
Silberschatz, A., Galvin, P., & Gagne, G. (2018). Operating system concepts (10th ed.). John Wiley & Sons, Inc.
Stallings, W. (2018). Operating systems: Internals and design principles (9th ed.). Pearson Education Limited.

03 Handout 1 *Property of STI


IT2105 Concurrency and Deadlocks

The core of operating system design concerns the management of processes and threads, including the following (Stallings, 2018):
Multiprogramming – the management of multiple processes within a uniprocessor system.
Multiprocessing – the management of multiple processes within a multiprocessor.
Distributed processing – the management of multiple processes executing on multiple, distributed computer systems.

Below are some important terminologies related to concurrency:
Atomic operation – A function or action implemented as a sequence of one (1) or more instructions that appears to be indivisible: no other process can see an intermediate state or interrupt the operation. The sequence of instructions is guaranteed to execute as a group or not execute at all, with no visible effect on the system state otherwise. Atomicity thus guarantees isolation from concurrent processes.
Critical section – A section of code within a process that requires access to shared resources and that must not be executed while another process is in a corresponding section of code.
Race condition – A situation in which multiple threads or processes read and write a shared data item, and the final result depends on the relative timing of their execution.
Starvation – A situation in which a runnable process is overlooked indefinitely by the scheduler; although the process is able to proceed, it is never chosen.

The basic requirement for supporting concurrent processes is the ability to enforce mutual exclusion: the ability to exclude all other processes from a course of action while one (1) process is granted that ability. Any facility or capability that is to provide support for mutual exclusion should meet the following requirements (Stallings, 2018):
1. Only one (1) process at a time is allowed into its critical section, among all processes that have critical sections for the same resource or shared object.
2. A process that halts in its noncritical section must do so without interfering with other processes.
3. No deadlock or starvation: processes requiring access to a critical section must not be delayed indefinitely.
4. When no process is in a critical section, any process that requests entry to its critical section must be permitted to enter without delay.
5. No assumptions are made about relative process speeds or the number of processors.
6. A process remains inside its critical section for a finite time only.

Principles of Concurrency
Concurrency is an application performance technique that encompasses the ability to load and execute multiple runnable programs. Each of these programs may be an application process that does not necessarily execute on the central processing unit (CPU) at the same instant, yet their runtimes may overlap (Gregg, 2021).

In a single-processor multiprogramming system, processes are interleaved in time to yield the appearance of simultaneous execution. Even though actual parallel processing is not achieved, and even though there is overhead in switching back and forth between processes, interleaved execution provides major benefits in processing efficiency and program structuring.

A concurrent system supports more than one (1) task by allowing all the tasks to make progress.
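Progress on shared data is only safe when updates to the critical section are mutually exclusive. As a minimal sketch of these terms in code (using Python's `threading.Lock`; thread counts and iteration counts are arbitrary), each `counter += 1` is a read-modify-write that would be a race condition if left unsynchronized, and the lock makes it effectively atomic:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # entry protocol: mutual exclusion
            counter += 1    # critical section on shared data
                            # exit protocol: 'with' releases the lock

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 40000 — every update took effect exactly once
```

Removing the lock would reintroduce the race: the final value would then depend on the relative timing of the threads, exactly as the race-condition definition describes.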
Concurrency involves an array of design issues, including communication among processes, the accessing and sharing of resources, synchronization of the activities of multiple processes, and the allocation of processor time to processes. Concurrency can be viewed in the following contexts:
1. Multiple applications
2. Structured applications
3. Operating system structure

It is possible to classify how processes interact based on the degree of their awareness of one another's existence (Stallings, 2018):
Competition: Processes unaware of each other – This comprises independent processes that are not intended to work together; they can be batch jobs, interactive sessions, or a mixture of both. The potential control problems in this set-up are mutual exclusion, deadlock, and starvation. The best example of this situation is the multiprogramming of multiple independent processes.
Cooperation by sharing: Processes indirectly aware of each other – This involves processes that are not necessarily aware of each other's process identifications but share access to some objects. The result of one (1) process may depend on information obtained from the other processes. The potential control problems in this set-up are mutual exclusion, deadlock, starvation, and data coherence.
Cooperation by communication: Processes directly aware of each other – This encompasses processes that are able to communicate with each other by process identification and that are designed to work jointly on some activity. The potential control problems in this set-up are deadlock and starvation.

Below are some operating system (OS) concerns that are raised by the existence of concurrency (Stallings, 2018):
1. The OS must be able to keep track of the various processes through the use of process control blocks.
2. The OS must allocate and deallocate various resources for each active process.
3. The OS must protect the data and physical resources of each process against unintended interference by other processes.
4. The functioning of a process, and the output it produces, must be independent of the speed at which its execution is carried out relative to the speed of other concurrent processes.

Common Concurrency Mechanisms (Stallings, 2018)
The first major advancement in dealing with concurrency-related problems came around 1965 with Dijkstra's formal work, which treats operating system (OS) design as a collection of cooperating sequential processes and develops efficient and reliable mechanisms for supporting cooperation. Below are the common OS and programming language mechanisms used to provide or support concurrency:
Counting semaphore – An integer value used for signaling among processes. Only three (3) operations may be performed on a semaphore, all of them atomic: initialize, decrement, and increment. The decrement operation may result in the blocking of a process, and the increment operation may result in the unblocking of a process. This mechanism is also known as a general semaphore.
Binary semaphore – A semaphore that takes only the values zero (0) and one (1).
Mutual exclusion (mutex) lock – A mechanism similar to a binary semaphore. The key difference is that the process that locks the mutex must be the one to unlock it; only the holder of the lock can operate on it.
Condition variable – A data type used to block a process or thread until a specific condition is true.
Monitor – A programming construct that encapsulates variables, access procedures, and initialization code within an abstract data type. Monitors are easier to control and have been implemented in a number of programming languages, such as Java and Pascal-Plus. A monitor's variables may only be accessed via its access procedures, and only one (1) process may be actively executing in the monitor at any given time.
Event flag – A memory word used as a synchronization mechanism, with a specific application-defined event associated with each bit in the flag. A thread can wait for either a single event or a combination of events by checking one (1) or multiple bits in the corresponding flag.
Mailbox or message passing – A means for two (2) processes to exchange information, which may also be used for process synchronization.
Spinlock – A mechanism in which a process executes in a loop, waiting for the value of a lock variable to indicate availability.
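A counting semaphore's decrement-may-block / increment-may-unblock behavior is easiest to see in a producer-consumer arrangement. The sketch below is illustrative only (in Python's naming, `acquire` is the decrement and `release` is the increment); the semaphore counts the items available, so the consumer blocks whenever the buffer is empty:

```python
import threading
from collections import deque

buffer = deque()
items = threading.Semaphore(0)    # counts items available to consume
consumed = []

def producer():
    for i in range(5):
        buffer.append(i)          # produce an item
        items.release()           # increment: may unblock the consumer

def consumer():
    for _ in range(5):
        items.acquire()           # decrement: blocks while count is zero
        consumed.append(buffer.popleft())

c = threading.Thread(target=consumer)
p = threading.Thread(target=producer)
c.start(); p.start()
c.join(); p.join()

print(consumed)   # [0, 1, 2, 3, 4]
```

Because the semaphore never lets the consumer run ahead of the producer, the `popleft()` in the critical path can never fire on an empty buffer.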
Principles of Deadlocks (Stallings, 2018)
A deadlock can be defined as the permanent blocking of a set of processes that either compete for system resources or communicate with each other. A set of processes is deadlocked when each process in the set is blocked waiting for an event, typically the freeing of some requested resource, that can only be triggered by another blocked process in the set. Note that all deadlocks involve conflicting needs for resources by two (2) or more processes.

The following are the general resource categories:
Reusable resources – Resources that can be used by only one process at a time and are not depleted by that use. As an example of a deadlock involving reusable resources, consider two (2) programs that compete for exclusive access to a disk file D and a tape drive T: deadlock occurs if each process holds one (1) resource and requests the other.
Consumable resources – Resources that can be created (produced) and destroyed (consumed). An unblocked producing process may create any number of such resources; when a resource is acquired by a consuming process, the resource ceases to exist. As an example of a deadlock involving consumable resources, consider a pair of processes in which each attempts to receive a message from the other and then sends a message to the other: deadlock occurs if each receive blocks until its message arrives.

Figure 1. Example of a deadlock. Source: Operating Systems: Internals and Design Principles (9th ed.), 2018, p. 291
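The disk-file/tape-drive scenario can be sketched with two locks standing in for the two reusable resources. This is a hedged illustration, not a general prevention algorithm: both jobs here acquire the resources in the same fixed order, so the each-holds-one-and-requests-the-other pattern can never arise and both jobs always complete:

```python
import threading

disk_file = threading.Lock()    # stands in for disk file D
tape_drive = threading.Lock()   # stands in for tape drive T
finished = []

def job(name):
    # Both jobs request D first, then T. With a single fixed
    # acquisition order, no job can hold T while waiting for D,
    # so the circular hold-and-request of the D/T example is impossible.
    with disk_file:
        with tape_drive:
            finished.append(name)

a = threading.Thread(target=job, args=("P",))
b = threading.Thread(target=job, args=("Q",))
a.start(); b.start()
a.join(); b.join()

print(sorted(finished))   # ['P', 'Q'] — both jobs ran to completion
```

If one job instead acquired T before D, the deadlock in the example could occur; imposing a single ordering on resource acquisition is the idea behind the circular-wait prevention discussed under deadlock prevention.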
The resource allocation graph, introduced by Richard Holt, is a useful tool for characterizing the allocation of resources to processes. It is a directed graph that depicts the state of system resources and processes, wherein processes and resources are represented by nodes connected by edges. A resource allocation graph can be described as follows:
o An edge directed from a process to a resource indicates a resource that has been requested by the process but not yet granted.
o A dot within a resource node represents an instance of that resource.
o An edge directed from a reusable resource node dot to a process indicates a request that has been granted: one (1) unit of the resource has been assigned to the process.
o An edge directed from a consumable resource node dot to a process indicates that the process is the producer of that resource.

Figure 2. Examples of resource allocation graphs. Source: Operating Systems: Internals and Design Principles (9th ed.), 2018, p. 297

The following conditions must be present for a deadlock to occur (Silberschatz, Galvin, & Gagne, 2018):
1. Mutual exclusion: At least one resource must be held in a non-sharable mode, meaning that only one (1) thread at a time can use the resource. If another thread requests that resource, the requesting thread must be delayed until the resource has been released.
2. Hold and wait: A thread must be holding at least one (1) resource while waiting to acquire additional resources that are currently held by other threads.
3. No preemption: Resources cannot be preempted; a resource can be released only voluntarily by the thread holding it, after that thread has completed its task.
4. Circular wait: A closed chain of threads exists such that each thread holds at least one resource needed by the next thread in the chain.

The first three (3) conditions are necessary, but not sufficient, for a deadlock to occur. The fourth condition is required for a deadlock to actually take place, and it is technically a potential consequence of the first three. Deadlock can thus also be described as an unresolvable circular wait.

Deadlock Prevention, Avoidance, and Detection

Deadlock Prevention: Disallow one of the four conditions for deadlock occurrence – This strategy involves designing the system in such a way that the possibility of deadlock is excluded (Stallings, 2018).
A. Indirect methods of deadlock prevention (preventing one of the first three conditions):
o Mutual exclusion: In general, this condition cannot be disallowed. If access to a resource requires mutual exclusion, then mutual exclusion must be supported by the operating system.
o Hold and wait: This condition can be prevented by requiring a process to request all of its required resources at once and blocking the process until all of the resources can be granted simultaneously.
o No preemption: This condition can be prevented in the following ways:
 If a process holding certain resources is denied a further request, it must release its original resources;
 If necessary, the process may request those resources again, together with the additional resources; and
 A process may be required to release other resources so that another process can proceed with its execution.
B. Direct method of deadlock prevention (preventing the occurrence of the fourth condition):
o Circular wait: This condition can be prevented by defining a linear ordering of resource types.

Deadlock Avoidance: Do not grant a resource request if the allocation might lead to a deadlock condition – This strategy allows the three necessary conditions but makes judicious choices to ensure that the deadlock point is never reached, thereby allowing more concurrency than prevention. With deadlock avoidance, decisions are made dynamically; hence, knowledge of future process resource requests is required. Two (2) approaches to deadlock avoidance are as follows (Stallings, 2018):
A. Process initiation denial: Do not start a process if its demands might lead to a deadlock; and
B. Resource allocation denial: Do not grant an incremental resource request to a process if this allocation might lead to a deadlock.

Below are some restrictions in implementing the deadlock avoidance strategy:
 The maximum resource requirement for each process must be stated in advance;
 The processes under consideration must be unconstrained by any synchronization requirements;
 There must be a fixed number of resources to allocate; and
 No process may exit while holding resources.

Deadlock Detection: Grant resource requests when possible, but periodically check for deadlock and act to recover – This strategy neither limits resource access nor restricts process execution. Requested resources are granted to processes whenever possible, and the operating system periodically executes an algorithm that detects the presence of a circular wait condition. Once a deadlock has been detected, any of the following recovery methods can be performed, whichever is most appropriate for the situation (Stallings, 2018):
a. Abort all deadlocked processes. This is the most common solution in operating systems.
b. Back up each deadlocked process to some previously defined checkpoint, and restart all processes. This requires that rollback and restart mechanisms be built into the system; however, the original deadlock may recur.
c. Successively abort deadlocked processes until the deadlock no longer exists. After each abort, the detection algorithm must be reinvoked to see whether a deadlock still exists.
d. Successively preempt resources until the deadlock no longer exists. A process whose resource has been preempted must be rolled back to the point prior to its acquisition of that resource.

The selection criterion for successively aborting deadlocked processes or preempting resources could be one of the following:
 The process with the least processor time consumed so far
 The process with the least output produced so far
 The process with the most estimated remaining time
 The process with the fewest total resources allocated so far
 The process with the lowest priority

References:
Gregg, B. (2021). Systems performance: Enterprise and the cloud (2nd ed.). Pearson Education, Inc.
Silberschatz, A., Galvin, P., & Gagne, G. (2018). Operating system concepts (10th ed.). John Wiley & Sons, Inc.
Stallings, W. (2018). Operating systems: Internals and design principles (9th ed.). Pearson Education Limited.

04 Handout 1 *Property of STI
