COMP2211 Operating Systems Combined Slides (University of Leeds) 2024/25
Document Details
University of Leeds
2024
Mantas Mikaitis
Summary
These slides cover the COMP2211 Operating Systems module at the University of Leeds for the 2024/25 academic year. The document details the module's structure, including lectures, labs, and assignments, along with key concepts like interrupts and the xv6 operating system.
Full Transcript
COMP2211 Operating Systems — Combined slides
Mantas Mikaitis, School of Computer Science, University of Leeds, Leeds, UK
Semester 1, 2024/25
M. Mikaitis (Leeds), COMP2211 Operating Systems, December 2024 (407 slides)

Objectives
- Introduce the structure of COMP2211.
- Discuss the coursework and exam.
- Describe the organisation of a computer system and interrupts.
- Discuss the components of a modern multiprocessor system.
- Introduce user mode and kernel mode.

Part I: Introduction to the Module

Structure of COMP2211: Staff
- Mantas Mikaitis (me)
- Tom Richardson

Structure of COMP2211: Lectures
Week 1 (current): Introduction to OS
Week 2: OS services
Week 3: Processes
Week 4: Xv6 — live coding and Q&A from the xv6 book
Week 5: Reading week; no scheduled labs or lectures
Week 6: Threads and concurrency; assignment due this week
Week 7: Scheduling
Week 8: Process synchronisation
Week 9: Memory management
Week 10: Catch up
Week 11: Module review

Structure of COMP2211: Times and Places
Lectures: Mon @ 2pm and Thu @ 12pm in Michael Sadler RBLT (LG.X04). Labs: Mon/Tue/Thu/Fri in Bragg 2.05; you will find one of the lab slots in your timetable.

Structure of COMP2211: Reading List
We will mainly use Operating System Concepts (OSC), 10th ed., 2018, and the 3rd edition of the xv6 book (XV6), 2022. These slides are based on OSC (see the reference list).
Week 1 (current): Chapter 1 OSC; Chapter 1 XV6.
Week 2: Chapter 2 OSC; Chapter 2 XV6.
Week 3: Chapter 3 OSC.
Week 4: Reread Chapters 1–2 XV6.
Week 5: Reread Chapters 1–3 OSC.
Week 6: Chapter 4 OSC.
Week 7: Chapter 5 OSC.
Week 8: Chapter 6 OSC.
Week 9: Chapters 9–10 OSC; Chapter 3 XV6.
Week 10: Reread Chapters 4–6 OSC.
Week 11: Reread Chapters 9–10 OSC.

Structure of COMP2211: Laboratories
Laboratories will be in C and Bash. Download the lab manual from Minerva. Weeks 1–3 introduce the xv6 operating system. In weeks 4–6 you will work on an assignment worth 40%. Weeks 7–11 contain further formative assessment exercises.

Structure of COMP2211: Xv6 operating system
Xv6 is a small teaching operating system from MIT. The source code is readable and editable. It runs on RISC-V architectures; to run it on Intel/AMD CPUs we emulate RISC-V with qemu. It is written in C (94.4%) with some RISC-V assembly (3.4%).
We use it by entering commands, much as with the Unix machines in the 2.05 lab, but it is more limited. The command-line interpreter recognises Bash-style commands. In the labs you will extend your copy of xv6 with more commands.

Structure of COMP2211: Programming languages
Mainly C: a lot of character handling; pointers and double pointers; arrays of characters and strings; dynamic memory allocation; xv6-specific process creation and command execution.

Structure of COMP2211: Vevox
We will use Vevox in the lectures a lot, to run quizzes. Let's try it.

Structure of COMP2211: Assessment
Deadline 2pm Nov. 7 (week 6): 40% of the module mark. Task: write your own command-line interpreter (shell) for xv6 that can perform various commands, such as ls or cd, redirect standard output to files, and other advanced features. The formative exercises in weeks 1–3 and reading Chapters 1–2 of the xv6 book are essential for this assignment.
The assignment will be automatically marked on Gradescope, by running a set of commands and checking whether the output of the submitted shell is as expected. January paper-based two-hour exam: 60% of the module mark. Reading the lecture slides and OSC is essential to succeed; engaging with the laboratory material also helps in mastering the learning outcomes.

Structure of COMP2211: Communication and feedback
Please email me with your questions, or ask staff in the lab, who can forward questions/comments to me without revealing your identity. Feedback: informal mid-module and formal end-of-module surveys. Feel free to leave feedback after lectures and labs and I will try to implement changes as we go along. For example, tell me: present slower; explain a specific topic again or differently; supply slides in a different colour theme; do fewer/more quizzes; discuss this piece of code in class; ...

Structure of COMP2211: Connection between labs and lectures?
Lectures cover general OS concepts. Laboratories focus on xv6, which uses some of those concepts. Do not expect every concept from the lectures to appear in xv6. You will find the theory useful in your further studies, interviews, placements, graduate roles, other modules, ...

Structure of COMP2211: What is in the exam?
All topics addressed in the lectures can appear in the January exam. Material appearing in the lectures is examinable based on the OSC contents of the appropriate chapters/sections.

Structure of COMP2211: Quiz (5 minutes)

Part II: Introduction to Operating Systems

Operating Systems: Main Definitions
A computer system can be divided into four components: hardware, operating system, application programs, users. The OS is a resource allocator: hardware (CPU, memory, mouse, keyboard, ...) provides resources; multiple applications running on the system compete for them; the operating system coordinates hardware use among users and applications. [Figure slide.]

User view: a laptop or PC consisting of monitor, keyboard and mouse, with one user who wants to use all of the resources; the OS is designed for ease of use rather than resource utilisation. Many users instead interact with mobile devices: touch screen, voice recognition. Embedded systems: some computers have little or no user view — home appliances, various devices in cars, and other specialised computers that largely work on their own.

System view: the OS is a resource allocator, intimately involved with the hardware. It manages CPU time, memory space, storage space and I/O access. It faces several requests at once and has to decide who gets the resources and who waits (users, applications). It is responsible for the overall efficient operation of the system. Control program: a different view of the OS — it controls the execution of programs to prevent errors and improper use of the hardware.

Operating systems arose from the growing complexity of computer hardware. Moore's Law correctly predicted in the 1960s that the number of transistors on an integrated circuit would double roughly every 18 months. Sizes shrank and functionality grew — uses are now very varied, and an OS is essential to manage the complexity. [Figure slide.]

A common definition of an operating system is the one program that always runs: the kernel. Alongside it are system programs, associated with the OS but not part of the kernel, and application programs. Nowadays an OS includes many things outside the immediate definition of what an OS does: browsers, photo viewers, word processors, ... Operating system = a kernel (always running), plus middleware frameworks that support application development, plus system programs that aid in various OS tasks.

Structure of COMP2211: Discussion with peers (5 minutes)
Consider the main components of a computer system again. Discuss with your peers how each of them would look in a washing machine: user interaction? applications? OS? hardware resources? programming them?

Part III: Concept of Interrupts

Computer-System Organization
Many devices compete for memory access. The OS uses device drivers to talk to the various controllers. Memory has a memory controller, which also does some managing to keep up with many simultaneous reads and writes. We now go deeper into various concepts within this system.

Interrupts
Consider the series of events in the system when we press a key on the keyboard — this constitutes input/output (I/O):
1. The device driver writes to the appropriate registers (memory locations) in the device controller.
2. The controller reads them to see what needs to be done (e.g. read a character from the keyboard).
3. The controller starts the transfer of data from the keyboard.
4. The controller informs the driver that the transfer is done.
5. The driver gives control to the OS to read the data.
How does the controller inform the driver (CPU) that it has finished an operation? Through an interrupt.

Hardware may trigger an interrupt at the CPU at any time. The CPU then stops what it is doing and services the interrupt through the interrupt service routine. Once the interrupt has been serviced, the CPU goes back to what it was doing.

The CPU may need to store the current program counter, for example in the link register or on the stack, to get back to what it was doing before the interrupt. The interrupt routine has to save any CPU state it will change, and restore it once finished. Once done, the program counter and the original program execution continue. Program Counter (PC) register: stores the address of the next instruction that the CPU will execute.

[Figure: interrupt-driven I/O cycle — (1) the device driver initiates I/O; (2) the I/O controller initiates the I/O while the CPU keeps executing, checking for interrupts between instructions; (3) "input ready, output complete, or error" generates an interrupt signal; (4) the CPU receives the interrupt and transfers control to the interrupt handler; (5) the handler processes the data and returns from the interrupt; (6) the CPU resumes processing of the interrupted task.]

Requirements for an effective interrupt system: the capability to defer interrupts when something more important is running on the CPU; an efficient way to service interrupts (we cannot take too long to respond); and a structure of high- and low-priority interrupts.

Interrupts: Advanced Material
When an interrupt occurs, the correct interrupt service routine has to be found. There can be hundreds to search through, and we need to be fast. Instead of searching, a table of pointers to interrupt service routines can be used: the interrupt vector. A unique ID on each interrupt indexes this vector, sending the CPU straight to the correct interrupt service routine.

Interrupt state save and restore: assume an interrupt service routine for the keyboard input interrupt uses CPU registers R1–R7 for internal operations. Before doing anything, R1–R7 have to be stored in memory (pushed on the stack), and once the interrupt is finished they should be restored. If this is not done, the CPU will return to its previous state but may crash, because the carpet was pulled from under its feet — the registers suddenly changed!

In reality the vectors get too big and a hybrid approach is used: interrupt chaining. Interrupt routines point to the next interrupt routine, and we loop through them until the right one is found. Nonmaskable interrupts: unrecoverable errors, such as memory read/write faults. Maskable interrupts: the CPU can turn these off before starting a critical code section.

Interrupts: Intel processor event-vector table [figure slide]

Interrupts: Common Terms
Raise an interrupt: ask the CPU to stop what it is doing and do something for me. Catch an interrupt: the CPU discovers that someone wants processing time. Dispatch the interrupt: call the interrupt handler. Clear the interrupt: call the correct interrupt service routine.

Interrupts: Vevox quiz (5 minutes)

Part IV: Storage and Memory

Storage Structure
The CPU can only load instructions from memory, so programs must be loaded into memory to run — usually into random-access memory (RAM). Computers use other memory as well: since RAM is volatile (contents are lost when power is off), we cannot trust it to hold, for example, the bootstrap program that runs at power-on. Read-only EEPROM memory is slow, rarely changed memory that preserves its contents when power is off. iPhones, for example, use EEPROM to store serial numbers and other hardware information.

Storage Structure: Reminder of Units
A bit is the basic unit of storage. A byte is 8 bits. A word is one or more bytes and varies between computer architectures; register width and instruction size usually determine how large a word is. Kilobytes, megabytes, gigabytes, terabytes, petabytes, ... Or, in fact, the correct International Organization for Standardization (ISO) binary prefixes, adopted in 2008: kibibytes, mebibytes, gibibytes, tebibytes, pebibytes, ... — that is, 1024, 1024², 1024³, ... bytes.

Storage Structure
Memory is laid out in arrays of bytes, which have addresses. The CPU interacts with memory through load and store instructions addressing specific bytes or words, which are moved between the CPU registers and memory. Similarly, the CPU loads instructions from memory automatically, addressed by the program counter.

You may have heard of the von Neumann architecture. Instruction-execution cycle: fetch an instruction, execute it, repeat. First the CPU fetches an instruction from a program in memory into an instruction register. Then it decodes the instruction and executes it in the hardware; the result may be stored back in memory. Main memory: ideally we would like programs and data to live in fast RAM, but this is not possible because of the memory's volatility and its relatively small size.

Storage Structure: Secondary storage
Data is kept in secondary storage, which preserves programs and data while the system is off: hard-disk drives (HDD) and nonvolatile memory (NVM) devices. Other storage: CDs, cache, Blu-ray disks, magnetic tapes, ...

Storage Structure: Hierarchy
Operating systems have to balance all of these storage types for the whole system to work efficiently and reliably.

I/O Structure
Interrupt-driven memory access is fine for small requests, but moving a lot of data this way does not work well. Direct Memory Access (DMA) is used to offload work from the CPU: the device controller transfers data directly between the device and main memory, without holding the CPU while doing so. One interrupt is generated per transfer, to tell the device driver that the operation has completed — better than an interrupt for every byte.

I/O Structure: Direct Memory Access [figure slide]

Part V: Single and Multi-processor Systems

Single-Processor Systems
Many years ago: a single processor containing one CPU with a single processing core. The core is the main piece of hardware within a CPU; it executes program instructions and manages register storage locally. General purpose or domain specific: it can run general programs, or a limited set of operations optimised for some task(s).
A computer system may have one general-purpose single-processor CPU and multiple domain-specific processors that accelerate specific tasks; from the perspective of the OS, such a system is still a single processor.

Multiprocessor Systems
Multiprocessor systems dominate the computing landscape nowadays: two or more processors, each with a single-core CPU. The main goal is to increase throughput — how much work we can do in a certain amount of time. Ideally N processors would give an N-times speedup; in reality it is less, because there is overhead in managing multiple processors that cooperate on a task, overhead that does not exist when only one processor is executing.

Symmetric multiprocessing (SMP) [figure slide]

Multiprocessor Systems
The definition gets more complicated today: multicore systems.

Multiprocessor Systems: Main Terms
CPU — the hardware that executes instructions. Processor — a physical chip that contains one or more CPUs. Core — the basic computation unit of the CPU. Multicore — multiple computing cores on the same CPU. Multiprocessor — multiple processors.

Single/multiprocessing: Discussion with peers (5 minutes)
Find out how many processors and CPUs there are in your chosen personal device (laptop, mobile phone). Discuss with your peers and compare. At the end, volunteers are welcome to tell us the details of their CPU. Does anyone in the room have a device with one CPU, or even one core?

Part VI: Key Concepts for Operating Systems

Operating-System Operations
As noted earlier, the bootstrap program is the key component that starts a computer. It is stored in nonvolatile memory; it initialises CPU registers, device controllers and memory contents; and it loads and starts executing the OS — locating the kernel and loading it into main memory. Once the kernel is loaded, it can start providing services to the system and its users. System daemons also run "always", alongside the kernel, and provide various system services. Once the kernel and daemons are running, the OS waits for I/O device requests and other tasks to do; it can sit quietly if nothing is happening.

Back to interrupts. Hardware interrupts: we looked at these — I/O interrupts and other devices. Trap interrupts: software-generated interrupts, for example division by zero or an invalid memory address being accessed.

Multiprogramming
A single program cannot keep the CPU or the I/O devices busy at all times; the ability to run multiple programs and switch between them is multiprogramming. It increases CPU utilisation by swapping which program in execution (a process) gets CPU time: when one process stops executing and starts waiting for I/O to finish, the CPU is allocated to another process that is ready to run.

Multitasking
Similar to multiprogramming, but the switches between processes are very frequent, to give users a fast response time. Having multiple running programs (processes) requires some form of memory management; we also need a set of rules for deciding which process gets to run (scheduling); and processes should not interfere with other processes. These issues will be addressed in later lectures.

Multiprogramming: Memory Layout [figure slide]

User and Kernel Modes of Operation
Main idea: an incorrect or malicious program should not be able to break the OS, execute code that belongs to OS services, or take over the hardware resources. To avoid these problems, the OS can execute code in user mode and in kernel mode (also known as system, supervisor or privileged mode). OS services and the kernel execute in kernel mode, while user programs run in user mode. When a program requests some important resource, it can cross into kernel mode for specific tasks: system calls.

At system boot time, the hardware starts in kernel mode. On interrupts, too, the hardware switches to kernel mode. In general, whenever the OS gains control, we are in kernel mode.

Timer: Periodic Interrupts from OS
For the OS to maintain control over the CPU, we need protection against a user program getting stuck in an infinite loop or similar. A timer is set to interrupt the computer after a specified period, which can be fixed or variable. The OS sets up the timer before transferring control to user programs. When the timer interrupt occurs, the OS gets control and can decide whether to abort the program or let it run longer. The instructions that set up the timer are privileged instructions — hardware operations that can only be executed in kernel mode.

Process Management
What is a process? A program is compiled and stored in main memory as a set of instructions. When the CPU is going through those instructions and executing them one by one, the program takes the form of a running process. The concept of processes is fundamental to OS resource management.
Examples of processes: a compiler compiling some code; a word processor with a document open; a social media app open on a smartphone.

Processes need resources: CPU, memory, I/O, files, initialisation data. A program is not a process — it is a passive entity, whereas processes are active entities. A single-threaded process has one program counter specifying the next instruction to execute; execution is sequential, the CPU executing instructions one at a time until the process terminates. Two processes can be associated with the same program, but they are considered separate entities with separate execution sequences. Multithreaded processes have multiple program counters — we will address threads later in the module. Typically many processes exist, some belonging to the OS and executing in kernel mode, some to users and executing in user mode; the OS multiplexes between these processes on single- or multicore CPUs.

The OS undertakes the following activities in relation to processes: creating and deleting processes; scheduling processes and threads on the CPUs; suspending and resuming processes; providing process synchronisation (addressed later in the module); providing ways for processes to communicate (also later in the module). We will get back to processes in Week 3.

Memory Management
Main memory is central to the operation of a modern computer system. It can be very large, holding thousands to billions of bytes, each addressed separately, and the CPU and I/O devices can read and write those bytes. Apart from registers and caches, main memory is the only memory directly accessible by the CPU: for the CPU to access other data, such as data on disks, the data has to be transferred to RAM first. Data and instructions therefore first travel to RAM.

Memory Management: Program Execution
1. Load a program into main memory and let the CPU know the start address.
2. As the program executes, instructions and data are accessed by addressing main memory.
3. When the program terminates, its space is freed and a new program may take its place in main memory.
4. Several programs are usually in memory at once, which creates the need for memory management.
OS memory management: keep track of used memory blocks and which process is using them; allocate and deallocate memory space; decide which processes to move in and out of memory. We will get back to this in Week 9. Notice that the complexity of the OS's work keeps growing.

Cache Management
Caching is a technique used to speed up access to commonly read or written information: the core idea is to copy blocks of information from slower to faster memory and then access them from that faster memory. This state is temporary, and caching happens very frequently at all levels: hardware, OS, software. When we need a particular piece of data we first check the cache; if it is found, we do not go to the slower memory. Otherwise we copy the data from RAM to the cache, assuming it will be needed again soon.

The cache is smaller than main memory. The cache replacement policy is an important consideration in an OS and can increase performance significantly: what should we keep in the cache and what should we evict back to memory?

Cache Coherence
In a multiprocessor environment, each processor may have a separate cache. If two caches contain the same data copied from RAM, and one of them updates it, what happens to the other? Cache coherency is needed to make sure the data is not outdated in one of the copies.
In distributed systems problem severe: same data in different computers. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 71 / 407 Security and Protection OS protects hardware and memory resources from what can be considered unauthorized access by processes and users. Protection: mechanism for controlling access to certain resources defined by the computer system. Security: defense of the system against internal and external attacks: denial-of-service, worms, viruses, identity theft, theft of service,... Operating System Security Measures Keep track of user IDs; each user IDs has associated resources that they can access; group ID allow set of users to be assigned permissions. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 72 / 407 Virtualization Run another OS within the main OS, run applications on that guest OS. Emulation: guest OS compiled for a different hardware which has to be emulated to run on the hardware we have on our desk. Virtualization: guest OS compiled for our hardware, and run natively, using that CPUs instruction set. This is faster than emulation, but limited. XV6 In the labs we are emulating RISC-V architecture and running xv6 by translating RISC-V instructions that xv6 uses to Intel/AMD instructions which the lab computers understand. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 73 / 407 Virtualization processes processes processes processes programming kernel kernel kernel interface VM1 VM2 VM3 kernel virtual machine manager hardware hardware (a) (b) M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 74 / 407 Kernel Data Structures We finish this week with a look (a reminder?) of the various data structures that are used in the kernel to store and manage data. Singly linked list: data data data null Doubly linked list: data null data data data null M. 
Mikaitis (Leeds) COMP2211 Operating Systems December 2024 75 / 407 Kernel Data Structures Circularly linked list: data data data data Lists of size n require at most n checks to find an item. A stack is a common data structure in OS: last-in first-out (LIFO) structure which pushes things at the top and pops them from the top of the list. Interrupt routines push registers and pop them back to restore the previous state of the CPU. Queue similarly uses first-in first-out idea (FIFO). M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 76 / 407 Kernel Data Structures Trees introduce hierarchy: parent-child structure between data points. 17 12 35 6 14 32 40 Tree search complexity Unbalanced tree of n nodes can require up to n comparisons to find the data. Balanced tree can improve this by requiring log(n) (height of left and right subtrees differ by at most 1). M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 77 / 407 Kernel Data Structures Hash maps can allow a search cost of at most 1. hash_function(key) hash map 0 1.. n value Ideally, each unique key is mapped to unique value, which can be used as an index to a table containing data we want. Hash collisions can occur: different keys map to the same value. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 78 / 407 COMP2211: Progress Week Topic 1 Introduction to OS 2 OS services 3 Processes 4 Xv6: Live coding and Q&A from the xv6 book 5 Reading week. No scheduled labs or lectures 6 Threads and concurrency. Assignment due this week. 7 Scheduling 8 Process synchronisation 9 Memory management 10 Catch up 11 Module review M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 79 / 407 Progress Week Topic 1 Introduction to OS 2 (current) OS services 3 Processes 4 Xv6: Live coding and Q&A from the xv6 book 5 Reading week. No scheduled labs or lectures 6 Threads and concurrency. Assignment due this week. 
7 Scheduling 8 Process synchronisation 9 Memory management 10 Catch up 11 Module review M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 80 / 407 Structure of COMP2211: Reading List We will be mainly using the Operating System Concepts (OSC) 10th ed., 2018, and the 3rd edition of the xv6 book (XV6), 2022. These slides are based on OSC (see the reference list). Week Reading materials 1 Chapter 1 OSC. Chapter 1 XV6. 2 (current) Chapter 2 OSC. Chapter 2 XV6. 3 Chapter 3 OSC. 4 Reread Chapters 1–2 XV6. 5 Reread Chapters 1–3 OSC. 6 Chapter 4 OSC. 7 Chapter 5 OSC. 8 Chapter 6 OSC. 9 Chapters 9–10 OSC. Chapter 3 XV6. 10 Reread Chapters 4–6 OSC. 11 Reread Chapters 9–10 OSC. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 81 / 407 Objectives Identify services that OS provides. Discuss system calls. Compare monolithic, layered, microkernel, modular and hybrid approach to OS design. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 82 / 407 Part I: Description of the Problem M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 83 / 407 Operating-System Services M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 84 / 407 Operating-System Services Set of features helpful most directly to the user: User interface (UI): graphical, touch-screen, command-line interface,... Program execution: load programs from memory and run them; end execution. I/O operations: access data from files or I/O devices. For efficiency and protection, users cannot do so directly: OS services do that for us. File-system manipulation: read, write, create, delete, search files. Access permission control. Communications: Communicate between processes. Shared memory or message passing communication. Error detection: Errors occur in CPU, memory, I/O devices, user programs: OS has to detect, correct, report errors. Sometimes may decide to halt the system. M. 
Mikaitis (Leeds) COMP2211 Operating Systems December 2024 85 / 407

Operating-System Services
Other services exist mainly for the operation of the system itself: Resource allocation: multiple processes are running on the system and must be allocated resources: CPU cycles, main memory, file storage, I/O device access. Logging: keep track of which programs use what resources. Protection and security: protect information belonging to different users on the system; make sure processes do not interfere with one another; control resource access. Security from outside: control access to the system.

User and OS Interface: Command Interpreters
On UNIX and Linux systems there are multiple options: C shell, Bourne-Again shell, Korn shell. The main function is to execute user-supplied commands, which usually manipulate files on the system. Commands can be built into the shell, or they can be separately stored executables which the shell invokes. The latter requires no modification to the shell when new commands are added.

User and OS Interface: Command Interpreters (figure)

User and OS Interface: Graphical User Interface
Instead of entering commands directly, we can use a Graphical User Interface—a mouse-based window-and-menu interface. Users move the mouse and click on images that represent files, executables, and directories to interact with them. The first GUI appeared in 1973. Apple made the GUI (desktop) widespread in the 1980s. On UNIX systems the command line traditionally dominated, but open-source GUIs exist: KDE, GNOME, ... A touchscreen is a form of GUI common on mobile devices.

User and OS Interface: When is the Command-Line Interface Better?
The command line is usually faster, but requires specialized knowledge.
System administrators for example would choose command line over a GUI for most tasks. Not everything is available in GUI—specialized commands only accessed through CLI. Easier to do repetitive tasks—commands can be recorded in a file and easily rerun (shell scripts). M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 90 / 407 Vevox quiz M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 91 / 407 Part II: System Calls M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 92 / 407 System Calls System calls: a well-defined interface to the services of an operating system, used by programmers and users. Usually written in C or C++, but assembly also used. Consider an example task of reading a file and writing its contents to another file. As a UNIX command it may look like this: cp in.txt out.txt M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 93 / 407 System Calls: an Example cp in.txt out.txt In this simple task, many OS services are employed: Entering the command, or moving a mouse to select files, causes sequence of I/O system calls. Then, files need to be opened: another set of system calls. Errors need to be detected: input file not existent, output file already exists with the same name. Can ask user if they want to replace the output file—requires set of system calls. When both files are open, we loop by reading bytes from one to another (system calls). Each read must return possible error conditions: end-of-file, hardware failure to read,... Once done, files should be closed (system calls). M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 94 / 407 System Calls: an Example M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 95 / 407 Application Programming Interface (API) Most programmers will not see this level of complexity of numerous OS services being in use. 
APIs hide this away behind a set of standard functions which are made available to programmers for performing common tasks when developing applications. Input and output parameters are specified for each API function. Common APIs: the Windows API; the POSIX API (UNIX, Linux, macOS); the Java API (applications based on the Java Virtual Machine). APIs provide code portability and ease the task of using OS services.

Application Programming Interface (API) (figure)

Application Programming Interface (API)
System call interface: an abstraction that allows the programmer not to think about the details of the system calls used inside API functions. They only need to obey the API and understand the effects of calling it.

System Calls: Parameter Passing
System calls require various information, for example, files, devices, addresses in memory, lengths of byte streams, ... Three methods to pass parameters to the OS: Through registers. Stored in a table whose address is passed through a register. Pushed onto a stack by the program and popped off the stack by the OS.

System Calls: Parameter Passing (figure)

System Calls: Types
We can roughly group system calls into six categories: 1 Process control 2 File management 3 Device management 4 Information maintenance 5 Communications 6 Protection. Next we discuss each of these categories.

System Calls: Process Control
A running program needs to be able to halt its execution. If the termination is abnormal, some log files are usually generated. A debugger may use those logs to aid the programmer in fixing problems. Bugs are usually discovered this way in the code.
When a process is running, it may want to load and execute other programs. Create, terminate, duplicate, and wait for processes. Get information about a process. Where data is shared among processes, locking is provided to assure no clashes. We will go into detail in later weeks.

System Calls: File Management
Common system calls that deal with files: Create and delete files. Open files for reading and writing. Similar operations are required for directories. Determine and set attributes: file name, type, protection codes, ...

System Calls: Device Management
Processes may need resources to execute: main memory (RAM), disk drives, access to files, ... Available resources can be granted, but usually processes will have to wait for them. We can think of resources as devices: physical or virtual. The OS provides system calls for interacting with these: Request and release a device, similar to the open and close system calls for files. Once we have the device allocated to us, we can read and write. File handling and general device handling are so similar that UNIX merges the two.

System Calls: Information Maintenance
There are system calls for transferring information between the OS and user programs: Time and date calls. Version of the OS. Amount of free memory or disk space. A memory dump also goes into this category. Other debugging information is usually provided: single stepping, runtime profiling, program counter recording, and various information about processes.

System Calls: Communication
Processes need to communicate, and there are two main methods: the message-passing model and the shared-memory model. Message-Passing Process Communication Model: processes exchange messages with one another to transfer information.
Before communication takes place, a connection must be opened. The computer host name and process name are used to identify the possibly remote parties of the communication. System calls to establish or abort communication are available. Other system calls to receive and send messages are also available. Shared-Memory Model: processes use system calls to create and gain access to regions of memory owned by other processes. Normally, the OS prevents a process from accessing memory allocated to other processes. In the shared-memory model, processes have to agree to remove this restriction.

System Calls: Protection
The OS should provide services for protecting computer system resources. Traditionally this was to protect one user from another on an instance of some OS. With the Internet, all systems became concerned about protection. System calls include setting permissions on files and disks, and allowing/denying access (for particular users).

System Calls: Example System Calls on Windows and UNIX (figures)

System Services
OS System Services: separate from the system calls within the OS. System services sit between the OS and the application programs. Also called system utilities.

System Services
Some system services are interfaces to system calls, but some are more complex. Examples: File management: create, delete, copy, rename, print, list files/directories. Status information: can be simple (time, date, memory space, users) or more complex information about performance or debugging. File modification: text editors, text searching utilities, text transformation.
Programming language support: compilers, assemblers, debuggers, interpreters. Program loading and execution. Application programs supplied with the OS are usually higher-level tools that utilize many system services: browsers, word processors and text formatters, spreadsheets. Most users view the OS through application programs and system services, not system calls.

Vevox quiz

Part III: Code Compilation and Loading

Linkers and Loaders
Usually a program resides on disk as a binary executable file. To run it, the executable has to be copied into memory and placed in the context of some process. Relocatable object file: source code compiled into object files suitable to be moved into a particular memory location. The linker combines these objects into a binary executable. The linker may include standard libraries, such as math.h in C. A loader loads the executable into memory to be run on the CPU. Relocation assigns final addresses to the various parts of the executable after it is placed in memory.

Linkers and Loaders (figure)

Linkers and Loaders
Consider running ./main on the command line. The shell first creates a new process using the fork() system call. The shell then invokes the loader with exec(), passing it the name of the executable: main. The loader loads the program into main memory, using the address space of the new process. A similar process occurs in a GUI when double-clicking the executable with the mouse.
Mikaitis (Leeds) COMP2211 Operating Systems December 2024 116 / 407 Linkers and Loaders: Dynamic Linking In the above we assume that libraries are linked into the executable and then loaded into memory together with the rest of the program code—even if the code will end up not calling those libraries. Dynamic Linking Link libraries dynamically when the program is being loaded into memory. Avoid linking and loading libraries that will end up not being used in the program. Instead the library is loaded when, and if, it is required during run time. Possible memory space improvements. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 117 / 407 Linkers and Loaders: ELF format Object files and executables typically have a standard format. It holds machine code and various metadata about functions and variables in the program. Unix and Linux use the ELF format. One data point of interest is an entry point—address of the instruction to execute upon the start of the program. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 118 / 407 Why Applications are OS Specific Applications compiled for one system (OS-hardware combination) usually will not work on a different system. Each OS has unique system calls. Possible solutions: 1 Use interpreted languages like Python, Ruby: interpreter on each system goes through the source code and executes correct instructions and system calls. Interpreter can be limited. 2 Use language like Java that runs on Java Virtual Machine (JVM): virtual machine is ported to different systems and programmers use the universal interface of the JVM rather than the specific OS. 3 Compile code (such as C) for every different configuration. In general this is still a difficult problem and there is no ultimate solution. Porting is required. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 119 / 407 Part IV: Design and Structure of Operating Systems M. 
Mikaitis (Leeds) COMP2211 Operating Systems December 2024 120 / 407 OS Design and Implementation Design of an operating system is a major undertaking and there is no complete solution that could generate an OS automatically given requirements. Internal structure can vary widely, based on the purpose of OS. User goals and system goals first are outlined. User: OS easy to learn and use, reliable, fast, safe. System: easy to design, implement, maintain; efficient, reliable, error free. There can be many interpretations of these vague requirements General principles are known (we are learning them in this module), but designing one is a creative task that relies on many human decisions and software engineering. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 121 / 407 OS Design and Implementation: Policy and Mechanism Separation of policies (what) and mechanisms (how) is an important concept. Example policy: interrupt OS regularly; Mechanism: timer interrupts. Good approach as we can change policies later and mechanisms are in place: for example, change the timer interrupt frequency. Linux example The standard Linux kernel has a specific CPU scheduler—but we can change it to support a different policy in how we schedule different jobs on the system. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 122 / 407 OS Design and Implementation: Languages OS is a collection of many programs, implemented by many people over years—general statements hard to make but there are some common points. Kernel: assembly, C. Higher level routines: C, C++, other. For example, Android is mostly C and some assembly. Android system libraries C or C++. Android APIs: Java. Advantages of High Level Languages Code written faster, more compact, easier to understand and maintain. Compiler improvements easy to integrate by recompiling the OS. Easier to port the whole OS to new architectures. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 123 / 407 Vevox quiz M. 
Mikaitis (Leeds) COMP2211 Operating Systems December 2024 124 / 407

OS Structure
Monolithic kernel: place all the functions of the kernel into a single, static binary file that runs in a single address space. Not much structure, or no structure at all. The original UNIX system used this approach: it had a kernel and the system programs. It has evolved over the years to gain some structure.

OS Structure: Traditional UNIX System (figure)

OS Structure
Monolithic kernels are simple in concept, but are difficult to implement and extend, as everything is in one big kernel rather than structured. They have a performance advantage, which explains why they are still relevant.

OS Structure: Linux System (figure)

OS Structure
Monolithic kernels are said to be tightly coupled because changes in one part of the system can affect all other parts. We can instead take a loosely coupled approach, where the kernel is structured into parts doing specific and limited functions. Layered system: the highest layer is the user interface, while the lowest layer is the hardware. Layers can only call functions from the layer below. Debugging is easier in this approach: debug the first layer without affecting the rest of the system; once done, move up a layer.

OS Structure: Layered Approach (figure)

OS Structure
Kernels can be modularized using the microkernel approach. Remove all nonessential components from the kernel and implement them as user-level programs—this results in a small kernel. When the operating system needs to be extended, new services are added in user space rather than by modifying the kernel.
Kernel modifications require fewer changes since the kernel is small. A microkernel is also easier to port to other hardware, and it provides more security since most services run in user mode. Performance suffers compared to one big kernel: the different parts have to communicate.

OS Structure: Microkernel (figure)

Vevox Quiz

COMP2211: Progress
Week Topic 1 Introduction to OS 2 OS services 3 Processes 4 Xv6: Live coding and Q&A from the xv6 book 5 Reading week. No scheduled labs or lectures 6 Threads and concurrency. Assignment due this week. 7 Scheduling 8 Process synchronisation 9 Memory management 10 Catch up 11 Module review

Progress
Week Topic 1 Introduction to OS 2 OS services 3 (current) Processes 4 Xv6: Live coding and Q&A from the xv6 book 5 Reading week. No scheduled labs or lectures 6 Threads and concurrency. Assignment due this week. 7 Scheduling 8 Process synchronisation 9 Memory management 10 Catch up 11 Module review

Reading List
We will be mainly using Operating System Concepts (OSC), 10th ed., 2018, and the 4th edition of the xv6 book (XV6), 2024. These slides are based on OSC (see the reference list). Week Reading materials 1 Chapter 1 OSC. Chapter 1 XV6. 2 Chapter 2 OSC. Chapter 2 XV6. 3 (current) Chapter 3 OSC. 4 Reread Chapters 1–2 XV6. 5 Reread Chapters 1–3 OSC. 6 Chapter 4 OSC. CW deadline 7 Chapter 5 OSC. 8 Chapters 6–8 OSC. 9 Chapters 9–10 OSC. Chapter 3 XV6. 10 Reread Chapters 4–6 OSC. 11 Reread Chapters 9–10 OSC.

Objectives
Study the concept of processes. OS representation and scheduling of processes. Creation and termination of processes.
Study the methods for interprocess communication. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 137 / 407 Part I: Introduction to Processes M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 138 / 407 What is a Process? Early computers: only one program executed at a time. One program had complete control over resources. Today: multiple programs in memory, all executed through multitasking. This evolution required compartmentalization of various programs. Process Process is a program in execution. One program invoked multiple times results in multiple processes. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 139 / 407 A Concept of Processes Early computers were batch systems: execute jobs submitted by users. Minimum interaction. This was followed by time-shared systems: user programs or tasks. Potentially interacting. Even one user can run several tasks: browser, email, editor. Jobs Calling running programs jobs has historical significance, as most of the OS concepts were developed around job processing. You may see the term used to this day, but process is a modern replacement M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 140 / 407 A Concept of Processes The status of the current activity of some process is represented by Current value of the program counter: where are we in the execution of the binary? Contents of the CPU registers: what data are we working on? M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 141 / 407 A Concept of Processes: Memory Layout Each process has its own memory layout: Text section: executable code. Data: global variables. Heap section: Memory that grows and shrinks dynamically during execution. Stack: Structure for temporary values (function parameters, return addresses, and local variables). Text and data sections are fixed size. Heap and stack change size. M. 
A Concept of Processes: Stack
The stack contains a potentially small amount of data, which is pushed and popped using specific CPU instructions. For example, when a function is called, the input arguments, local variables, and the return address are pushed onto the stack. Upon completing the function, data is popped from the stack; the last item is usually the return address, used to get back to the caller.

A Concept of Processes: xv6 Stack (figure)

A Concept of Processes: Heap
The heap can be grown dynamically. In C, malloc and free do that for us. Usually the heap and stack grow toward each other—the OS watches for overlap. On the right is the xv6 process memory, which is slightly different.

A Concept of Processes: Memory Layout Example (figure)

Vevox Quiz

Part II: Scheduling of Processes

Process State Transitions
Processes change states during execution. New: the process has been created. Running: the CPU is executing the process's instructions. Waiting: the process is waiting for an event (for example, I/O completion). Ready: the process is waiting to be assigned to a processor. Terminated: the process has finished execution. Note that only one process can be running on a core. Others may be ready or waiting.

Process Control Block (PCB)
The OS keeps track of processes using a process control block (PCB). It holds: Process state. Program counter (PC). CPU registers: along with the PC, these have to be saved when the process is interrupted. CPU-scheduling information: priority and other scheduling parameters.
Memory-management information: location of the various memories assigned to the process (Week 9). Accounting information: resource-utilization statistics. I/O status information: I/O devices allocated to the process, open files, ...

Process Control Block (PCB): Linux Example (figure)

Processes or Threads?
You may have heard of threads more than processes; it is a common term in parallel programming frameworks, such as GPU programming. Here we have implied that a process performs a single thread of execution: a single sequence of instructions being read by the CPU. Multi-threading: operating systems nowadays extend processes to be able to perform multiple tasks at once—for example, two cores executing at different locations in the binary of a process. The PCB and other parts of the OS have to be expanded. See Week 6 for detail.

Process Scheduling
The objective of multiprogramming is to maximize CPU utilization. Time sharing (or multitasking) adds another requirement: the switching between processes should be frequent enough for users to interact with the running programs. Process Scheduler: an integral part of the operating system which meets the constraints posed by time sharing and multitasking by selecting a process to run from the set of available processes. Multiprocessing: each CPU core can run one process at a time; N CPUs can run N processes. If more processes than cores are created, some will have to wait. The degree of multiprogramming is the number of processes currently in memory.

Process Scheduling
Two types of processes: I/O bound: most time spent waiting for I/O. CPU bound: most time spent in execution.
The mix of processes going through the system will affect the objectives of multiprogramming and time sharing.

Process Scheduling: Queues
Processes enter the system and are put into a ready queue. The queue is usually a linked list, where each PCB links to the next. There may be other queues, for example a wait queue for processes waiting on I/O.

Process Scheduling: Queues (figure)

Process Scheduling: Queuing Diagram (figure)

Process Scheduling: CPU Scheduler
The role of the scheduler: from the set of ready processes, select one and run it on the CPU. The scheduler works frequently; I/O-bound processes may execute for only a few milliseconds before waiting for I/O. CPU-bound processes may require the CPU for extended durations, but the scheduler is unlikely to grant that. Schedulers are typically designed to switch processes very frequently (less than every 100 milliseconds). Memory Swapping: the system may decide to move a process from memory to disk, reducing the degree of multiprogramming and thus the active contention for the CPU. Later the process can be returned to memory and continued where it left off (a state save/restore is required).

Process Scheduling: Context Switching
Remember that interrupts cause the CPU to pause its current task and run some kernel routine. The CPU needs to save the current context of a process and later restore it to continue running the process. The context is represented in the PCB of a process: register contents, the state of the process, memory-management information. Context Switch: perform a state save of the current process and a state restore of a different process. A context switch is pure overhead, as the CPU is not executing any process instructions. Typical speed: several microseconds.
Depends on size of state needed to save/restore. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 159 / 407 Process Scheduling: Context Switching M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 160 / 407 Vevox Quiz M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 161 / 407 Part III: Process Manipulation M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 162 / 407 Operations on Processes: Creation Processes may create several new processes during execution. Creating process is called a parent process and the new processes are called children processes. New processes can in turn create more—this forms a tree of processes. Process identifier (pid) is usually used to identify each process. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 163 / 407 Operations on Processes: Creation Linux example systemd created on boot; it starts the processes for various services. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 164 / 407 Operations on Processes: Creation Options for resources for the child processes: Obtain directly from OS. Share a subset of resources from a parent. Restricting child process to a subset of the parent’s resources avoids overloading the system through creation of many child processes. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 165 / 407 Operations on Processes: Creation When a process creates another process, two possibilities: Parent and child execute concurrently (not necessarily in parallel). Parent waits until some or all of its children terminate. Address space also has two possibilities: Parent and child have the same program and data (xv6 fork). The child has a new program loaded into it (xv6 fork and then exec). M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 166 / 407 Operations on Processes: Creation In UNIX, a new process is created by fork(): New process has a copy of address space of the original process. 
Both processes continue execution at the instruction after the fork. fork() returns zero in the child and PID of the child in the parent. After the fork() usually exec() is called: Process’ memory space is replaced by a new program. Load a binary file into memory. Destroy the memory image of the program containing exec system call. Parent can then create more children, or wait until termination of current ones. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 167 / 407 Operations on Processes: Creation M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 168 / 407 Operations on Processes: Creation in UNIX with C M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 169 / 407 Operations on Processes: Termination Process terminates when it asks OS to delete it using exit() system call. All the resources: physical and virtual memory, open files, I/O buffers are reclaimed by the OS. A parent may forcibly terminate its created processes: If the child process has exceeded its usage of some allocated resources. The job that the child is doing is no longer required. The parent is exiting and it is required to terminate the sub-tree of processes before exiting (cascading termination). Zombie processes Parents may call wait() to wait for their children to terminate. The processes that have terminated but whose parents have not yet called wait() are called zombie processes—we have to keep them in the system to return the status to the parent eventually. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 170 / 407 Part IV: Communicating Processes M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 171 / 407 Interprocess Communication Active processes on the system can either be Independent or cooperating. A process is independent if it does not share any data while executing. A process is cooperating if it can affect or be affected by other processes. 
Process cooperation is useful in a few scenarios: Information sharing: for example, copy-paste between programs. Computation speedup: split big tasks into multiple subtasks. Modularity: the system may be designed to have separate processes or threads working cooperatively to achieve some function. When processes cooperate, they require interprocess communication (IPC). M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 172 / 407 Interprocess Communication Two fundamental concepts: Shared-memory model: agree on a region of memory to share among cooperating processes. Read and write there to exchange information. Message-passing model: use a message-passing protocol to send and receive information. Both are implemented in operating systems. The message-passing model is useful when no conflict resolution is desired. However, it is slower, since each read/write requires kernel operations. With the shared-memory model, conflicts and race conditions may appear (two processes writing at once). Message passing is required to communicate between different systems that do not share memory. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 173 / 407 Interprocess Communication M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 174 / 407 Interprocess Communication: Pipes Pipes were one of the earliest UNIX mechanisms for interprocess communication. An example of the shared-memory model of communication. Four key design considerations: Bidirectional or unidirectional communication? If bidirectional, can data travel in both directions at once? Do we need a relationship (parent-child) between communicating processes? Can we use pipes over a network, or only locally? M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 175 / 407 Interprocess Communication: Ordinary Pipes Producer-consumer model: the producer writes to the write end of the pipe while the consumer reads from the read end. Unidirectional: we need two pipes for communicating back to the producer.
In UNIX this is constructed using pipe(int p[2]), where p[0] is the read end of the pipe and p[1] is the write end. p[0] and p[1] are special types of files in UNIX, therefore fork() in the parent will make the child inherit them. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 176 / 407 Interprocess Communication: Named Pipes Ordinary pipes provide a simple mechanism for processes to communicate, but they exist only while the processes are communicating. When the processes terminate, the pipe disappears. Named pipes (FIFOs) in UNIX provide extra functionality: Bidirectional communication. No parent-child relationship needed. Several processes can use the pipe for communication. The pipe remains active after communicating processes terminate. Named pipes are bidirectional, but provide only half-duplex transmission (only one direction at a time). M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 177 / 407 COMP2211: Progress Week Topic 1 Introduction to OS 2 OS services 3 Processes 4 Xv6: Live coding and Q&A from the xv6 book 5 Reading week. No scheduled labs or lectures 6 Threads and concurrency. Assignment due this week. 7 Scheduling 8 Process synchronisation 9 Memory management 10 Catch up 11 Module review M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 178 / 407 Progress Week Topic 1 Introduction to OS 2 OS services 3 Processes 4 Xv6: Live coding and Q&A from the xv6 book 5 Reading week. 6 (current) Threads and concurrency. Assignment due this week. 7 Scheduling 8 Process synchronisation 9 Memory management 10 Catch up 11 Module review M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 179 / 407 Reading List We will be mainly using the Operating System Concepts (OSC) 10th ed., 2018, and the 4th edition of the xv6 book (XV6), 2024. These slides are based on OSC (see the reference list). Week Reading materials 1 Chapter 1 OSC. Chapter 1 XV6. 2 Chapter 2 OSC. Chapter 2 XV6. 3 Chapter 3 OSC. 4 Reread Chapters 1–2 XV6. 5 Reread Chapters 1–3 OSC.
6 (current) Chapter 4 OSC. 7 Chapter 5 OSC. 8 Chapters 6–8 OSC. 9 Chapters 9–10 OSC. Chapter 3 XV6. 10 Reread Chapters 4–6 OSC. 11 Reread Chapters 9–10 OSC. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 180 / 407 Objectives Discuss the motivation, benefits, and challenges in designing multithreaded processes. Talk about the basic components of a thread. Describe mechanisms for threading. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 181 / 407 Vevox big quiz (15–20 minutes) M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 182 / 407 Part I: The Concept of Threads M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 183 / 407 Threads: Introduction In week 3 we explored the concept of a process, which assumes that a running program has a single thread of control. For interest, xv6 supports only single-threaded processes. See p. 29 of the xv6 book. In modern computing most operating systems provide capabilities for processes to have multiple threads of control. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 184 / 407 Threads: Introduction What is a Thread? A basic unit of CPU utilization; it comprises a thread ID, a program counter (PC), a register set, and a stack. Threads within the same process share: the code section, data section, and other resources (for example, open files). A traditional process has a single thread of control. Processes with multiple threads can perform more than one task at a time. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 185 / 407 Single and Multithreaded Processes M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 186 / 407 Multithreading Examples: An application creating photo thumbnails from a collection of images uses a different thread for each image. A web browser displays images or text in one thread and retrieves network data in another.
A word processor has a thread to display the UI, a thread to respond to keystrokes, and a thread for spellchecking. Multicore Systems and Threads Applications can be designed for threads to run in parallel on multicore systems. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 187 / 407 Example: Multithreading in a Web Server In some situations, applications may be required to perform several similar tasks. For example, a web server accepts client requests for web pages, images, and sound. Several users may request access at the same time. Running a single thread, the web server would make other users wait, potentially for prolonged periods. One approach: a tree of processes, with the root being the server and the children serving user requests. However, process creation is time consuming and its resource use is significant. Since similar tasks are performed on requests, multithreading is a more efficient solution. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 188 / 407 Example: Multithreading in a Web Server M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 189 / 407 Multithreading Other motivating aspects: Most OS kernels are multithreaded; for example, during Linux boot time, threads are created for managing devices, memory management, or interrupt handling. Various applications that parallelize well can take advantage of multithreading: sorting, tree algorithms, graph algorithms. Data mining, graphics, artificial intelligence: people aim to design algorithms to exploit multicore architectures. Sometimes problems are embarrassingly parallel, without data dependencies, such as adding two vectors together. These problems can be easily solved across many cores. Sign up for COMP3221 next year to get into the details. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 190 / 407 Multithreading: Benefits Four major categories: 1 Responsiveness: if one part of an application blocks, other threads can continue working.
2 Resource sharing: threads share the memory and resources of the process to which they belong. 3 Economy: allocating resources when creating processes is costly; context switching is also costly. Threads are cheaper in both respects. 4 Scalability: multithreaded processes can exploit multiple cores, whereas a single-threaded one can only run on one core. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 191 / 407 Part II: Introduction to Parallel Computation M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 192 / 407 Multicore Programming Recall that a multicore environment is one in which a single processing chip contains multiple computing cores. Communication between cores on the same chip is very fast. Multithreaded programming provides a mechanism for efficient use of multiple cores. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 193 / 407 Multicore Programming: Concurrency or Parallelism? Concurrency (top) and parallelism (bottom) refer to different concepts. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 194 / 407 Multicore Programming: Concurrency or Parallelism? On a single core, concurrency means interleaving the execution of threads in time. On a multicore system, concurrency means that some threads can execute simultaneously, which means there is parallelism. We can have concurrency without parallelism. Historical perspective Before multi-processor/core architectures became prevalent, most systems had a single processor, and operating systems were designed to provide the illusion of parallelism by rapidly switching between processes. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 195 / 407 Multicore Programming: Key Challenges in Designing Programs System designers and application programmers are pressured to make better use of multiple cores—this is ongoing. The systems are growing in size, at both large (warehouse computers) and small (laptops, phones) scale.
Operating systems have to accommodate multicore hardware. Old single-threaded applications have to be ported. New algorithms have to be developed from scratch (see the whole topic of parallel algorithms). M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 196 / 407 Multicore Programming: Key Challenges in Designing Programs For example, see the TOP500 list: https://top500.org/lists/top500/list/2024/06/. ∼ 9 million cores. Photo source: Wikipedia. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 197 / 407 Multicore Programming: Key Challenges in Designing Programs 1 Identifying tasks: examine applications and find workloads that can be divided, ideally into independent tasks. 2 Balance: make sure parallel tasks perform similar amounts of work. 3 Data splitting: the data accessed and manipulated by parallel tasks have to be divided. 4 Data dependency: do tasks depend on output data from other tasks? If so, synchronization may be required. 5 Testing and debugging: a program running on N cores has many execution paths. Debugging is more difficult than in the single-threaded case. Of interest Many people believe that an entirely new software design approach will be needed in the future. Computer Science educators often talk about teaching software development through increased emphasis on parallel programming. Question: how many of you have done some parallel code development/debugging/reading? M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 198 / 407 Multicore Programming: Types of Parallelism Top: data parallelism; bottom: task parallelism. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 199 / 407 Multicore Programming: Types of Parallelism Data parallelism: distribute data across N cores, each receiving a subset of the whole data. Example: sum a vector of size K. A single core would get elements from 0 to K − 1 and sum them in series.
On an N = 2 core system, core 1 would get elements 0 to K/2 − 1 and core 2 would get elements K/2 to K − 1. Task parallelism: distribute different tasks (operations) across multiple cores. Each task may require part or the whole of the input data. In practice you will likely see a hybrid approach. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 200 / 407 Multicore Programming: Amdahl’s Law What speedup can we expect when we add additional cores to an application that contains both serial and parallel parts? speedup ≤ 1 / (S + (1 − S)/N), where N is the number of cores and S is the portion of the application that must be performed serially. Example: S = 0.25 (25% of the application is serial and 75% is parallel). If N = 2 (2 cores), the speedup is no greater than 1.6×. If we now set N = 4 (4 cores), the upper bound on the speedup is 2.28× (not 4×!). M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 201 / 407 Multicore Programming: Amdahl’s Law M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 202 / 407 Part III: Libraries for Multithreading M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 203 / 407 Multithreading Models Support may be provided for threads at user level and kernel level: user threads and kernel threads. User threads work in user mode, while kernel threads require direct OS support. What is the relationship between user and kernel threads? M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 204 / 407 Multithreading Models: Many-to-One Disadvantage: the entire process blocks if one thread calls a blocking system call. Multiple threads are unable to run in parallel → very few systems implement this. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 205 / 407 Multithreading Models: One-to-One Advantages: another thread can run when one calls a blocking system call. Parallel processing is possible. Disadvantage: each user thread requires creating a kernel thread. Performance may suffer. M.
Mikaitis (Leeds) COMP2211 Operating Systems December 2024 206 / 407 Multithreading Models: Many-to-Many Advantage: the number of kernel threads is customizable according to application or machine requirements. The number is limited and does not depend on how many user threads there are. The kernel can run threads in parallel. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 207 / 407 Multithreading Models: Two-Level Threads A combined approach which allows specific user threads to be assigned a kernel thread, but still multiplexes between other user and kernel threads. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 208 / 407 Thread Libraries A thread library provides an API for managing threads. Two approaches: implemented entirely in user mode, or supported in kernel mode. Three main libraries: POSIX Pthreads, the Windows thread library, and the Java thread API. For Pthreads and Windows, data declared globally is shared among threads. Synchronous and asynchronous threading In asynchronous threading the parent continues working concurrently with any child threads created. In synchronous threading the parent waits for its children to complete: for example, the parent may combine the results from the children. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 209 / 407 Thread Libraries: Pthreads Pthreads is a standard API for thread creation and synchronization. Various IEEE standards address it. It is a specification, not an implementation. OS designers can implement the specification their own way. The C program on the right is a standard way to use pthreads. runner executes in a separate thread from main. sum is shared between both threads. Look into Windows and Java threading. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 210 / 407 Implicit Threading Designing parallel programs manually is potentially a cumbersome task. One way to better support the design of concurrent and parallel applications is to automate the identification and creation of threads.
Offload this work from developers to compilers: implicit threading. Advantage of implicit threading The developer identifies tasks that can run in parallel. The environment determines the low-level details of thread creation and management. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 211 / 407 Implicit Threading: Thread Pools Manual thread creation has two problems: Thread creation may be costly; threads are discarded once finished. The number of threads is unbounded: it may exhaust system resources. Thread pools Create a number of threads at startup and make them available for doing work. The application requests a thread from the pool: if there is a thread available, it is allocated; otherwise, wait until a thread is placed back in the pool. Several advantages of thread pools: Servicing a request with an existing thread may be faster than creating and deleting threads. A thread pool limits the number of threads. A thread pool can be configured based on available resources, or adjusted dynamically based on what the applications are doing. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 212 / 407 Implicit Threading: Thread Pools The fork-join model works for implicit threading as well: A library manages threads and the assignment of tasks to threads. Threads are not created directly during fork. Parallel tasks are identified and designated to threads. The join mechanism provides the synchronization. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 213 / 407 Implicit Threading: Example Libraries A fork-join library is available in the Java API. OpenMP is a way to augment C, C++, and Fortran programs to identify what can be parallelized. Grand Central Dispatch is an Apple technology for implicit threading. Intel Threading Building Blocks supports designing parallel C++ programs. CUDA allows programming NVIDIA graphics cards to perform massively parallel computations. M.
Mikaitis (Leeds) COMP2211 Operating Systems December 2024 214 / 407 Implicit Threading: OpenMP example in C M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 215 / 407 Implicit Threading: OpenMP example in C Take two arrays a and b of size N. We want to add the arrays and produce a new array c. This is an embarrassingly parallel problem. The OpenMP pragma will divide the work among the threads it creates. Different parts of the vectors will be added in parallel. Other OpenMP features Developers can choose between several levels of parallelism. For example, set the number of threads manually. It also allows specifying whether data is shared among or private to threads. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 216 / 407 Activity: Discuss with Peers (5 minutes) Assume that the number of available cores is 4. Take the length of the array N = 40. Questions: 1 What is the maximum number of for loop iterations that will be executed in parallel? 2 How many iterations will each core run, assuming each iteration runs in the same amount of time and all cores start at the same time? M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 217 / 407 Part IV: Threading Issues M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 218 / 407 Threading Issues: fork and exec calls We have seen fork() and exec() in a small UNIX system: create a child and load and execute a binary. This becomes difficult in a multithreaded environment: does fork() duplicate only the calling thread, or all threads within the process? Different fork functions may be provided to achieve either. The exec call typically works as usual: the entire process is replaced by the specified program. Which fork to use depends on the application. M. Mikaitis (Leeds) COMP2211 Operating Systems December 2024 219 / 407 Threading Issues: Signal Handling Signalling in UNIX is used to inform processes of events. The occurrence of some event generates a signal. The signal is delivered to a process.
The process handles the signal. Example events: division by zero, illegal memory access, the CTRL-C key combination. Which thread should a signal be delivered to? Various methods exist,