Operating Systems 24CSCI03I Past Paper PDF

Document Details


The British University in Egypt

Wessam El-Behaidy

Tags

operating systems, threads, concurrency, computer science

Summary

This document is a set of lecture notes on operating systems, focusing on threads, concurrency, and interprocess communication. It covers the many-to-one, one-to-one, and many-to-many threading models, and explores race conditions and how to avoid them. The notes are detailed and well structured, offering a comprehensive overview of these topics.

Full Transcript


OPERATING SYSTEMS 24CSCI03I
By Wessam El-Behaidy, Associate Professor, Computer Science
Main Reference: Operating System Concepts, Abraham Silberschatz, 10th Edition

Threads & Concurrency

Threads - Review
▪ Modern OSs provide features enabling a process to contain multiple threads of control, and most modern applications are multithreaded; threads run within the application.
▪ A process can have multiple threads of control in order to perform more than one task at a time.

Motivation
▪ Applications can be designed to utilize the processing capabilities of multicore systems: they can perform several CPU-intensive tasks in parallel across the multiple computing cores.
▪ A single application may be required to perform several tasks. For example, a web server accepts thousands of clients concurrently; if the web server ran as a single-threaded process, it would be able to service only one client at a time.
▪ Creating a new process is heavyweight.

Multithreaded Web Server Architecture

Threads - Review
▪ Threads are lightweight processes: each thread has its own ID, program counter (PC), register set, and stack, but shares with the other threads in the same process the code, data, and heap sections, as well as other OS resources such as open files and permissions.
▪ Each thread belongs to exactly one process, and no thread can exist outside a process.
▪ A thread in process P cannot reference a thread in process Q.

Thread Switching
▪ Thread switching is a type of context switching from one thread to another thread within the same process.
▪ It is very efficient and much cheaper than process switching, because it switches out only per-thread state such as the program counter, registers, and stack pointer.
▪ Process switching, by contrast, involves switching all of the process's resources, such as memory addresses, page tables, kernel resources, and processor caches.

Types of Threads
▪ User-level threads - management is done in user space.
▪ Kernel-level threads - the operating system manages the threads, acting in the kernel.
▪ Kernel-level threads are supported by virtually all general-purpose operating systems, including Windows, Linux, Mac OS X, iOS, and Android.
▪ User-level threads are simply called threads, while the term lightweight process refers to kernel threads.

User Threads
▪ The kernel is not aware of the existence of these threads; it handles them as if they were single-threaded processes.
▪ User-level threads are much faster than kernel-level threads, since no kernel-mode privileges are required for thread switching.
▪ But they cannot use multiprocessing to their advantage: the entire process is blocked if one user-level thread performs a blocking operation.

Kernel Threads
▪ The application has no direct control over these threads: the kernel performs thread creation, scheduling, switching, and management in kernel space, which requires a mode switch to the kernel.
▪ Kernel threads are generally slower to create and manage than user threads.
▪ Kernel threads are strongly implementation-dependent.
▪ A kernel thread is the schedulable entity, which means the kernel schedules on a per-thread basis: it can simultaneously schedule multiple threads from the same process, and if one thread in a process is blocked, it can schedule another thread of the same process. The context information for the process as well as its threads is all managed by the kernel → generally slower.

Multithreading Models
▪ A relationship must exist between user threads and kernel threads; operating systems provide a combined user-level and kernel-level thread facility.
▪ There are three multithreading models: Many-to-One, One-to-One, and Many-to-Many.

Many-to-One Model
▪ Many user-level threads are mapped to a single kernel thread.
▪ Drawbacks: only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multicore systems, and the entire process will block if a thread makes a blocking system call.
▪ Few systems currently use this model,
because of its inability to take advantage of multiple processing cores.

One-to-One Model
▪ Each user-level thread maps to a kernel thread.
▪ Advantage: more concurrency; another thread can run when a thread makes a blocking system call.
▪ Drawbacks: creating a user thread requires creating a kernel thread, so the number of threads per process is sometimes restricted due to the overhead, and the developer has to be careful not to create too many threads.
▪ Examples: Windows, Linux.

Many-to-Many Model
▪ Allows many user-level threads to be mapped to a smaller or equal number of kernel threads.
▪ Allows the operating system to create a sufficient number of kernel threads.
▪ Although the many-to-many model appears to be the most flexible, it is difficult to implement; the one-to-one model is more commonly used.

Thread Libraries
▪ A thread library provides the programmer with an API for creating and managing threads.
▪ Three main thread libraries are in use today:
POSIX Pthreads: provided as either a user-level or a kernel-level library under UNIX, Linux, etc.
Windows: a kernel-level library.
Java: the Java thread API is generally implemented using a thread library available on the host system. Any Java program - even one consisting of only a main() - comprises at least a single thread in the JVM. Java threads are available on any system that provides a JVM.

Thread Benefits
▪ Responsiveness - may allow continued execution if part of the process is blocked; especially important for user interfaces.
▪ Resource Sharing - threads share the resources of their process, which is easier than shared memory or message passing between processes.
▪ Economy - cheaper than process creation, and thread switching has lower overhead than context switching between processes.
▪ Scalability - a process can take advantage of multicore architectures.
There are also drawbacks! In particular, it can be difficult to write correct multithreaded programs, given the risk of race conditions.
Multicore Programming
▪ On a system with more than one core, multithreading may lead to multiple CPU instructions of the same program being executed at the same time → parallelism.
▪ Beware of the difference between the two:
Parallelism implies a system can perform more than one task simultaneously.
Concurrency supports more than one task making progress; the OS can give the illusion of parallelism on a single processor/core by alternating quickly between tasks.
▪ Thus, it is possible to have concurrency without parallelism.

Concurrency vs. Parallelism
▪ Concurrent execution on a single-core system interleaves the tasks over time.
▪ Parallel execution on a multicore system runs tasks on different cores at the same time.

Multicore Programming - Types of Parallelism
▪ Data parallelism - distributes subsets of the same data across multiple cores, with the same operation performed on each subset.
▪ Task parallelism - distributes threads across cores, with each thread performing a unique operation.

Interprocess Communication (IPC)
How can processes coordinate and interact with one another?

Interprocess Communication
▪ Processes within a system may be independent or cooperating. A cooperating process can affect or be affected by other processes, including by sharing data.
▪ Reasons for cooperating processes:
Information sharing - several applications may be interested in the same piece of information (for instance, copying and pasting).
Computation speedup - only if the computer has multiple processing cores; a process may be broken into subtasks, each of which executes in parallel with the others.
Modularity - it may be desirable to construct the system in a modular fashion, dividing the system functions into separate processes or threads.

Interprocess Communication
▪ Cooperating processes need interprocess communication (IPC).
▪ There are two models of IPC:
Shared memory - an area of memory shared among the processes.
Message passing - a queue of messages exchanged between the processes.

IPC – Message Passing
▪ Processes communicate with each other without sharing an address space.
▪ Useful for exchanging smaller amounts of data, because no conflicts need be avoided.
▪ Easier to implement in a distributed system - e.g., an Internet chat program.
▪ The IPC facility provides two operations: send(message) and receive(message).
▪ If processes P and Q wish to communicate, they need to establish a communication link between them, which may use:
Direct or indirect communication
Synchronous or asynchronous communication
Automatic or explicit buffering

IPC – Shared Memory
▪ Can be faster than message passing, since message-passing systems are implemented using system calls and thus require more kernel intervention; in shared-memory systems, system calls are required only to establish the shared memory regions.
▪ Normally, the OS tries to prevent one process from accessing another process's memory. Shared-memory IPC requires that two or more processes agree to remove this restriction.
▪ The communication is under the control of the user processes, not the operating system.
▪ The code for accessing the shared memory must be written explicitly by the application programmer.
▪ The form and the location of the data are determined by these processes and are not under the OS's control.
▪ The processes are also responsible for ensuring that they are not accessing the same location simultaneously - a race condition.

Let's understand the RACE CONDITION PROBLEM

Producer-Consumer Problem
▪ A classic paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.
Example: a printer is the consumer, waiting for tasks (print jobs) from different sources (producers), such as computers or users.
▪ An intermediate shared buffer is used to pass the produced items from the producer to the consumer.
▪ Two variations:
Unbounded buffer - places no fixed limit on the size of the buffer: the producer never waits; the consumer waits if there is nothing to consume.
Bounded buffer - assumes a fixed buffer size: the producer waits if the buffer is full; the consumer waits if there is nothing to consume.

Bounded Buffer – Shared-Memory Implementation
▪ Let's provide an implementation of the bounded-buffer producer-consumer problem in C, using shared-memory IPC between the producer and the consumer (e.g., two threads in the same process).
▪ The slides then show the shared data, the producer code, and the consumer code (not captured in this transcript), and ask: is this implementation correct?
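Since the slides' code itself did not survive the transcript, here is a sketch consistent with their description (the identifier `avail_items` follows the counter used on the register-level slide; `BUFFER_SIZE`, `in`, `out`, and the busy-wait loops are assumptions in the style of the textbook's version):

```c
#define BUFFER_SIZE 10

int buffer[BUFFER_SIZE];   /* shared circular buffer                 */
int in = 0;                /* next free slot, used by the producer   */
int out = 0;               /* next full slot, used by the consumer   */
int avail_items = 0;       /* number of items currently buffered     */

/* Producer: busy-wait while the buffer is full, then deposit an item. */
void produce(int item) {
    while (avail_items == BUFFER_SIZE)
        ;                              /* spin: buffer full */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    avail_items++;                     /* NOT atomic on real hardware */
}

/* Consumer: busy-wait while the buffer is empty, then remove an item. */
int consume(void) {
    while (avail_items == 0)
        ;                              /* spin: buffer empty */
    int item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    avail_items--;                     /* NOT atomic either */
    return item;
}
```

The answer to the slide's question is no: each routine is correct in isolation, but when a producer thread and a consumer thread run concurrently, the unprotected `avail_items++` and `avail_items--` can interleave and lose an update.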
The increment and decrement of avail_items compile into several machine instructions:

Producer code:
register1 = avail_items
register1 = register1 + 1
avail_items = register1

Consumer code:
register2 = avail_items
register2 = register2 - 1
avail_items = register2

Race Condition
▪ Consider this execution interleaving, with avail_items = 5 initially:
S0: producer executes register1 = avail_items {register1 = 5}
S1: producer executes register1 = register1 + 1 {register1 = 6}
S2: consumer executes register2 = avail_items {register2 = 5}
S3: consumer executes register2 = register2 - 1 {register2 = 4}
S4: producer executes avail_items = register1 {avail_items = 6}
S5: consumer executes avail_items = register2 {avail_items = 4}
▪ Notice that we have arrived at the incorrect state avail_items == 4, indicating that four buffers are full when, in fact, five buffers are full. This is what we call a race condition.

Race Condition
▪ Processes can execute concurrently, and they may be interrupted at any time, only partially completing execution before the interruption.
▪ Concurrent access to shared data may therefore result in data inconsistency.
▪ More formally: a race condition is a situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place.
▪ Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes. We will study some of these mechanisms in the next lecture.

Reading List
▪ You should study from books, not slides! The reading material for this lecture is:
Silberschatz, Galvin, Gagne. Operating System Concepts, Tenth Edition:
Chapter 3: IPC (Sec. 3.4, 3.5)
Chapter 4: Threads and Concurrency (Sec. 4.1 to 4.3)
Chapter 6: Synchronization Tools (Sec. 6.1)
▪ Credits: some of the material in these slides is reused (with modifications) from the official slides of the book Operating System Concepts, Tenth Edition, as permitted by their copyright note.
Start solving questions and practical exercises related to the covered parts of Chapters 3, 4, and 5 (e.g., how to calculate turnaround time (TAT) and CPU utilization). Do the same for further lectures as well - VERY IMPORTANT TO DO.
Thank You
