Operating Systems - Chapter 1
Dalhousie University
Summary
This document is Chapter 1 of a course on Operating Systems. It introduces the concepts of operating systems, computer architecture, and multiprocessor systems. It covers topics such as the different operating-system components and the distinction between computer architecture and computer organization within a computer system.
Full Transcript
Chapter 1: Introduction
Dalhousie University
CSCI 3120: Operating Systems

Chapter 1 outline:
- What Operating Systems Do
- Computer Architecture
- Computer Organization
- Operating System Operations
- Resource Management
- Free and Open-Source Operating Systems

What Operating Systems Do
A computer system can be divided into four components:
- Hardware: provides basic computing resources (CPU, memory, I/O devices)
- Operating system: controls and coordinates the use of hardware among various applications and users
- Application programs: define the ways in which the system resources are used to solve the computing problems of the users (word processors, compilers, web browsers, database systems, video games)
- Users: people, machines, other computers

Abstract View of Computer Components

Operating System Definition
There is no universally accepted definition.
- Definition 1: "Everything a vendor ships when you order an operating system." This is a good approximation, but it varies wildly.
- Definition 2: "The one program running at all times on the computer." Strictly speaking, this is the kernel, which is part of the operating system. Everything else is either a system program (ships with the operating system, but is not part of the kernel) or an application program (any program not associated with the operating system).

Computer Architecture vs. Organization
Computer architecture:
- The logical aspects of system implementation as seen by the programmer, e.g., instruction sets, instruction formats, data types, addressing modes.
- In short: how to design a computer?
Computer organization:
- Deals with the physical aspects of computer systems, e.g., circuit design, control signals, memory types.
- In short: how to implement the design?
For example, Intel and AMD processors share the same x86 architecture, but how the two companies implement that architecture (their computer organizations) is usually very different.
The same programs run correctly on both, because the architecture is the same, but they may run at different speeds, because the organizations are different. In practice, there is no clear distinction between computer organization and computer architecture.

Computer Architecture
- Single-processor systems: In the past, most computer systems used only one processor.
- Multiprocessor systems: On modern computers, from mobile devices to servers, multiprocessor systems now dominate the landscape of computing. Traditionally, such systems have two (or more) processors, each with a single-core CPU. The processors often share the computer bus and sometimes the clock, memory, and peripheral devices.

Multiprocessor Systems: SMP
The most common multiprocessor systems are Symmetric Multiprocessing (SMP) systems, in which each processor performs all kinds of tasks, including operating-system functions and user processes.

Multiprocessor Systems: Multicore Systems
The definition of multiprocessor has evolved over time and now includes multicore systems, in which multiple computing cores reside on a single chip. Multicore systems can be more efficient than systems with multiple single-core processors, because on-chip communication between cores is faster than between-chip communication. In addition, one chip with multiple cores uses significantly less power than multiple single-core chips, an important issue for mobile devices as well as laptops.

Multiprocessor Systems: A Dual-Core Design

Multiprocessor Systems: NUMA
Adding CPUs to a multiprocessor system will increase computing power. However, once we add too many CPUs, contention for the system bus becomes a bottleneck and performance begins to degrade. Non-Uniform Memory Access (NUMA) is an alternative approach: provide each CPU (or group of CPUs) with its own local memory that is accessed via a small, fast local bus.
The CPUs are connected by a shared system interconnect, so that all CPUs share one physical address space.

Clustered Systems
Clustered systems differ from multiprocessor systems in that they are composed of two or more individual systems. Each individual system is typically a multicore computer. Such systems are considered loosely coupled. We should note that the definition of clustered system is not concrete.
- Many commercial and open-source packages wrestle to define what a clustered system is and why one form is better than another.
- The generally accepted definition is that clustered computers share storage and are closely linked via a local-area network or a faster interconnect.

Computer Organization
A typical PC system:
- CPU: There could be one or more processing cores.
- Memory: Typically, there is a shared memory.
- Input/Output (I/O): The CPU and device controllers access shared memory via the common bus. Each device controller is in charge of a particular device type. Depending on the controller, more than one device may be attached to a device controller.

In the following subsections, we describe some basics of how such a system operates. We focus on two key aspects of the system: interrupts and storage. We start with interrupts, which alert the CPU to events that require attention.

Interrupt
An interrupt is a signal to the processor, emitted by hardware or software, indicating an event that needs immediate attention. There are two types of interrupts: hardware interrupts and software interrupts. A software interrupt is also known as a trap or an exception; it is caused either by an error (for example, division by zero or invalid memory access) or by a user program's request for a system call. Interrupts can be used to process a variety of different events, including I/O requests.
Interrupt-driven I/O Cycle
Note: An interrupt handler is a special block of code that processes a specific type of interrupt request.

Storage
- Main memory: the only large storage medium that the CPU can access directly; it is volatile. (Volatile: the memory loses its content when power is turned off.)
- Secondary storage: an extension of main memory that provides large storage capacity; the CPU cannot access it directly, and it is non-volatile.
  - Hard Disk Drives (HDD): rigid metal or glass platters covered with magnetic recording material.
  - Non-volatile memory (NVM) devices: faster than hard disks, such as flash memory cards and Solid-State Drives (SSD). They are becoming more popular as capacity and performance increase and prices drop.

Storage Hierarchy

OS Operations - Multiprogramming
One of the most important features of operating systems is the ability to run multiple programs, which is called multiprogramming.

Multiprogramming
The operating system keeps several processes in memory simultaneously. It picks and begins to execute one of these processes. Eventually, the process may have to wait for some task, such as an I/O operation, to complete. In a multiprogrammed system, when this event occurs, the operating system simply switches to and executes another process; when that process needs to wait, the CPU switches to yet another process, and so on. In a non-multiprogrammed system, the CPU would sit idle.

Multitasking
Multiprogramming gives no guarantee that a program will run in a timely manner; the very first program may run for hours without needing to wait. Multitasking is a logical extension of multiprogramming. In multitasking systems, the CPU executes multiple processes by switching among them, but the switches occur more frequently, providing the user with a fast response time.
Dual-mode and Multimode Operation
Since the operating system and its users share the hardware and software resources of the computer system, a properly designed operating system must ensure that an incorrect (or malicious) program cannot have a negative impact on other programs, including the operating system itself. In order to ensure the proper execution of the system, we must be able to distinguish the execution of operating-system code from user-defined code. The approach taken by most computer systems is to provide hardware support that allows differentiation among various modes of execution.

At the very least, we need two modes of operation (i.e. dual-mode operation):
- Kernel mode (also called supervisor mode, system mode, or privileged mode)
- User mode
When the computer system executes a user application, the system is in user mode. When the computer system executes an operating-system task, the system is in kernel mode.

In an operating system with dual-mode operation, a bit, called the mode bit, can be added to the hardware of the computer to indicate the current mode: kernel (0) or user (1). With the mode bit, we can distinguish between a task executed on behalf of the operating system and one executed on behalf of the user.

Note that when a user application requests a service from the operating system (via a system call), the system must transition from user mode to kernel mode to fulfill the request.

The concept of modes can be extended beyond two modes. For example:
- Intel processors have 4 separate protection rings, where ring 0 is kernel mode and ring 3 is user mode.
- ARM v8 systems have 7 modes.
- CPUs that support virtualization frequently have a separate mode to indicate when the virtual machine manager (VMM) is in control of the system.
In this mode, the VMM has more privileges than user processes but fewer than the kernel.

Resource Management
An operating system is a resource manager. It manages:
- Processes
- Memory
- Secondary/tertiary storage
- File systems
- Caches
- I/O

Process Management
A process is a program in execution. It is a unit of work within the system. The operating system is responsible for the following activities in connection with process management:
- Creating and deleting both user and system processes
- Scheduling processes and threads on the CPUs
- Suspending and resuming processes
- Providing mechanisms for process synchronization
- Providing mechanisms for process communication
We discuss process-management techniques in Chapter 3 to Chapter 8.

Memory Management
The CPU can access main memory directly, but it cannot access a hard disk directly. For example, for the CPU to process data from disk, the data must first be transferred to main memory by CPU-generated I/O calls. Memory-management activities:
- Keeping track of which parts of memory are currently being used and which process is using them
- Allocating and deallocating memory space as needed
- Deciding which processes (or parts thereof) and data to move into and out of memory
Memory-management techniques are discussed in Chapter 9 and Chapter 10.

Secondary/Tertiary-Storage Management
Secondary storage's advantages over main memory:
- It is normally larger than main memory.
- It provides non-volatile storage capability.
Most modern computer systems use HDDs and NVM devices as secondary storage. Secondary-storage management activities:
- Mounting and unmounting
- Free-space management
- Storage allocation
- Disk scheduling
- Partitioning
- Protection (e.g. RAID: Redundant Arrays of Independent Disks)

Tertiary storage: There are many uses for storage that is slower and lower in cost (and sometimes higher in capacity) than secondary storage.
Backups of disk data, storage of seldom-used data, and long-term archival storage are some examples. Magnetic tape drives, DVD drives, and Blu-ray drives are typical tertiary storage devices. Some operating systems take on the management task, while others leave it to application programs. Functions that operating systems can provide include:
- Mounting and unmounting
- Migrating data from secondary to tertiary storage
Techniques for secondary and tertiary storage management are discussed in Chapter 11.

File-system Management
A file is a collection of related information defined by its creator. Files are usually organized into directories to make them easier to use. File-system management activities:
- Creating and deleting files and directories
- Supporting primitives to manipulate files and directories
- Mapping files onto secondary storage
- Backing up files onto stable (non-volatile) tertiary media
File-system management techniques are discussed in Chapter 13, Chapter 14, and Chapter 15.

Cache Management
Caching is an important feature of computer systems. This technique uses a small, fast storage device, the cache, to significantly improve overall performance. Here is how caching works: information is normally kept in some storage system (such as main memory). As it is used, it is copied into a faster storage system (i.e. the cache) on a temporary basis. When we need a particular piece of information, we first check whether it is in the cache.
- If it is, we use the information directly from the cache.
- If it is not, we use the information from the source, putting a copy in the cache under the assumption that we will need it again soon.
Because the size of the cache is limited, cache management is an important design problem. Careful selection of the cache size and of a replacement policy can result in greatly increased performance.
The replacement algorithms discussed in Chapter 10 could be used as replacement policies for the cache.

I/O Subsystem
One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user. In UNIX, the specifics of I/O devices are hidden by the I/O subsystem. I/O subsystem management activities:
- Interfacing with other system components
- Managing I/O devices
- Transferring data
- Detecting I/O completion
I/O subsystem management techniques are discussed in Chapter 12.

History of Free Operating Systems - GNU
In the early days of modern computing (that is, the 1950s), software generally came with source code. Computer and software companies eventually sought to limit the use of their software to paying customers. Releasing only the binary files compiled from the source code, rather than the source code itself, helped them to achieve this goal. To counter the move to limit software use and redistribution, in 1984, Richard Stallman started developing a free, UNIX-compatible operating system called GNU, which is a recursive acronym for "GNU's Not Unix!".

History of Free Operating Systems - FSF
In 1985, Stallman founded the Free Software Foundation (FSF) to support the free-software movement. The free-software movement is a social movement with the goal of guaranteeing four essential freedoms for software users. Specifically, the movement insists that software users should be entitled to the following four essential freedoms:
1. The freedom to run the program
2. The freedom to study and change the source code
3. The freedom to redistribute copies
4. The freedom to distribute copies of the modified versions

History of Free Operating Systems - GPL
Software that comes with these four essential freedoms is called "free software" (also known as libre software). Note that "free" means "free as in freedom", not "free as in zero price".
In fact, the free-software movement started by Stallman is not opposed to trading a copy for an amount of money. Free software and open-source software are two similar concepts; the detailed difference between them can be found here: https://en.wikipedia.org/wiki/Open-source_software#Free_software

The GNU General Public License (GPL) is a license, written by Richard Stallman, to guarantee the essential freedoms. Nowadays, the GPL is a common license under which open-source software is released. Fundamentally, the GPL requires that the source code be distributed with any binaries and that all copies (including modified versions) be released under the same GPL license.

History of Free Operating Systems - GNU/Linux
As an example of a free and open-source operating system, let us consider GNU/Linux. By 1991, the GNU operating system was nearly complete. The GNU Project had developed compilers, editors, utilities, libraries, and games, whatever parts it could not find elsewhere. However, the GNU kernel was never fully completed. In 1991, a student in Finland, Linus Torvalds, released a basic UNIX-like kernel using the GNU compilers and tools, and invited contributions via the Internet.

Involving programmers on the Internet meant that anyone interested could download the source code, modify it, and submit changes to Torvalds. Releasing updates once a week allowed this so-called "Linux" operating system to grow quickly. In 1991, Linux was not free software, as its license permitted only non-commercial redistribution. In 1992, however, Torvalds re-released Linux under the GPL, making it free software.

The resulting GNU/Linux operating system (with the kernel properly called Linux, but the full operating system including GNU tools called GNU/Linux) has spawned hundreds of unique distributions, or custom builds, of the system.
Major distributions include Red Hat, SUSE, Fedora, Debian, and Ubuntu. Distributions vary in function, utility, installed applications, hardware support, user interface, and purpose.

History of Free Operating Systems - BSD UNIX
BSD UNIX has a longer and more complicated history than Linux. UNIX was developed at AT&T in the early 1970s. BSD UNIX started in 1978 as a derivative of AT&T's UNIX, which was not open source. Eventually a fully functional, open-source version, 4.4BSD-lite, was released in 1994. Just like Linux, there are many distributions of BSD UNIX, including FreeBSD, NetBSD, and OpenBSD.

Darwin, the core kernel component of macOS, is based on BSD UNIX and is open-sourced as well. That source code is available from http://www.opensource.apple.com/ and every macOS release has its open-source components posted at that site. The name of the package that contains the kernel begins with "xnu." Apple also provides extensive developer tools, documentation, and support at http://developer.apple.com.

Chapter 2: Operating System Structures
Dalhousie University
CSCI 3120: Operating Systems

Chapter 2 outline:
- Operating System Services
- User Interface
- System Calls
- Compiler, Linker and Loader
- Operating System Design and Implementation
- Operating System Structure
- Building and Booting an Operating System

Operating System Services
Operating systems provide a series of services to programs and users. One set of operating-system services provides functions that are helpful to the user:
- User interface: Almost all operating systems have a user interface (UI).
It varies between a Command-Line Interface (CLI), a Graphical User Interface (GUI), and a touch screen.
- Program execution: The system must be able to load a program into memory, run that program, and end its execution, either normally or abnormally (indicating an error).
- I/O operations: A running program may require I/O, which may involve a file or an I/O device.
- File-system manipulation: The file system is of particular interest. Programs need to read and write files and directories, create and delete them, search them, list file information, and manage permissions.
- Communications: Processes may exchange information, on the same computer or between computers over a network.
- Error detection: The OS needs to be constantly aware of possible errors, which may occur in the CPU and memory hardware, in I/O devices, or in user programs. For each type of error, the OS should take the appropriate action to ensure correct and consistent computing.

Another set of OS functions exists to ensure the efficient operation of the system itself:
- Resource allocation: When multiple users or multiple jobs are running concurrently, resources must be allocated to each of them. There are many types of resources: CPU cycles, main memory, file storage, I/O devices.
- Accounting (i.e. logging): To keep track of which users use how many and what kinds of computer resources.
- Protection and security: The owners of information stored in a multiuser or networked computer system may want to control use of that information.

User Interface - CLI
A Command-Line Interface (CLI), a.k.a.
command interpreter, allows direct command entry. The CLI fetches a command from the user and executes it. There are two types of commands:
- Built-in commands: These commands are part of the CLI program.
- External commands: These commands correspond to independent programs.
You can use the command "type" to check the type of a command, for example:
$ type cd
cd is a shell builtin
$ type cat
cat is /bin/cat

There can be multiple CLIs in one OS. On Unix/Linux: C Shell, Bourne Shell, Bash, Korn shell, etc.

User Interface - GUI
Many OSes now include both CLI and GUI interfaces:
- Microsoft Windows: GUI with CLI (known as "Command Prompt")
- Apple macOS: GUI (named "Aqua") with UNIX shells
- Unix/Linux: CLI (i.e. UNIX shells) with optional GUI interfaces (e.g. CDE, KDE, GNOME)

Touchscreen Interfaces
Touchscreen devices require new interfaces: a mouse is not possible or not desired, actions and selection are based on gestures, and a virtual keyboard is used for text entry.

System Calls
System calls provide an interface to the services made available by an operating system. System calls are generally available as functions written in C and C++. Certain low-level tasks (for example, tasks where hardware must be accessed directly) may have to be written using assembly-language instructions. However, most programmers do not use system calls directly, because they are highly detailed and therefore more difficult to use. Typically, application developers design programs according to a high-level Application Programming Interface (API). The API specifies a set of functions that are available to an application programmer. The three most common APIs are:
- Win32 API: for Windows
- POSIX API: for POSIX-based systems (including virtually all versions of UNIX, Linux, and macOS)
- Java API: for the Java virtual machine (JVM)
A programmer accesses an API via a library of code provided by the operating system.
In the case of UNIX and Linux, for programs written in the C language, the library is called the C standard library (or libc).

Types of System Calls
System calls can be grouped roughly into six major categories.
- No. 1: Process control: create/terminate process; load/execute process; get/set process attributes; wait/signal event; allocate and free memory.
- No. 2: File management: create/delete file; open/close file; read, write, reposition file; get/set file attributes.
- No. 3: Device management: request/release device; read, write, reposition; get/set device attributes; logically attach or detach devices.
- No. 4: Information maintenance: get/set time or date; get/set system data.
- No. 5: Communications: create/delete communication connection; send/receive messages; transfer status information; attach/detach remote devices.
- No. 6: Protection: get/set file permissions; allow and deny user access.

Examples of Windows and Unix System Calls

Compiler, Linker and Loader
- Compiler: Source code is compiled into object files. Object files are designed to be loaded into any physical memory location.
- Linker: The linker combines object files into a binary executable file. If the object code corresponding to a library is needed, the library code is also linked into the executable file.
- Loader: Executable files must be brought into memory by the loader to be executed.

For a simple program without a linker option, you can use the following command to generate "main":
gcc -o main main.c
-lm: This is a linker option ("l" is a lowercase L and stands for link). It tells the linker to link ("-l") the math library ("m"). You need this option when you have "#include <math.h>" in your C program.

The previous example involves only one source file (i.e. "main.c").
Here is one example illustrating how to compile and execute a C program involving multiple source files: https://www.linuxtopia.org/online_books/an_introduction_to_gcc/gccintro_11.html

Compiler, Linker and Loader
The process described thus far assumes that all libraries are linked into the executable file and loaded into memory. In reality, most systems allow a program to dynamically link libraries as the program is loaded. Windows, for instance, supports dynamically linked libraries (DLLs). The benefit of this approach is that it avoids linking and loading libraries that may end up not being used by an executable file. Instead, the library is conditionally linked and is loaded only if it is required during program run time.

Object files and executable files typically have standard formats that include:
- The compiled machine code
- A symbol table containing metadata about the functions and variables that are referenced in the program
For UNIX/Linux systems, this standard format is known as the Executable and Linkable Format (ELF). Windows systems use the Portable Executable (PE) format. macOS uses the Mach-O format.

Operating System Design
At the highest level, the design of the system will be affected by the choice of hardware and the type of system (traditional desktop/laptop, mobile, distributed, or real time). Beyond this highest design level, the design could be based on two groups of goals:
- User goals: the operating system should be convenient to use, easy to learn, reliable, safe, and fast.
- System goals: the operating system should be easy to design, implement, and maintain, as well as flexible, reliable, error-free, and efficient.

Operating System Implementation
Early operating systems were written in assembly language. Most modern operating systems are written in higher-level languages such as C or C++; only a small number of systems are written in assembly language.
The lowest levels of the kernel might be written in assembly language and C, while higher-level routines might be written in C and C++. The advantages of using a higher-level language:
- The code can be written faster.
- The code is more compact.
- The code is easier to understand and debug.

Operating System Structure
A system as large and complex as a modern operating system must be engineered carefully. A common approach is to partition the task into small components. Each of these modules should be a well-defined portion of the system, with carefully defined interfaces and functions. In this course, we discuss five different structures:
- Monolithic Structure
- Layered Structure
- Microkernel Structure
- Modular Structure
- Hybrid Structure

Monolithic Structure - Original UNIX
The simplest structure for organizing an operating system is no structure at all: place all of the functionality of the kernel into a single, static binary file that runs in a single address space. This approach is known as the monolithic structure. An example of this type of structure is the original UNIX operating system, which consists of two separable parts: the kernel and the system programs.

Monolithic Structure - Linux
The Linux operating system is based on UNIX and is structured similarly. Applications typically use the standard C library (on Linux, glibc) when communicating with the system call interface to the kernel. The Linux kernel is monolithic in that it runs entirely in kernel mode in a single address space. However, as we shall see in Section 2.8.4, it does have a modular design that allows the kernel to be modified during run time. (glibc, the GNU C Library, is the GNU Project's implementation of the C standard library.)
Layered Structure
The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface. With modularity, layers are selected such that each layer only uses functions and services provided by the layer immediately below it.

Microkernel Structure
As UNIX expanded, the kernel became large and difficult to manage. In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach (pronounced /ma:k/) that modularized the kernel using the microkernel approach. This method structures the operating system by removing all non-essential components from the kernel and implementing them as user-level programs that reside in separate address spaces, which results in a smaller kernel.

There is little consensus regarding which services should remain in the kernel and which should be implemented in user space. Typically, however, microkernels provide minimal process management (i.e. CPU scheduling), memory management, and communication (i.e. IPC: InterProcess Communication).

The main function of the microkernel is to provide communication between the client program and the various services that are also running in user space. Communication is provided through message passing, which was described in Section 2.3.3.5. The performance of microkernels can suffer due to increased overhead: when two user-level services must communicate, messages must be copied between the services, which reside in separate address spaces. In addition, the operating system may have to switch from one process to the next to exchange the messages.
The overhead involved in copying messages and switching between processes has been the largest impediment to the growth of microkernel-based operating systems.

Modular Structure
Perhaps the best current methodology for operating-system design involves using loadable kernel modules (LKMs). Here, the kernel has a set of core components and can link in additional services via modules, either at boot time or during run time. This type of design is common in modern implementations of UNIX (such as Linux, macOS, and Solaris) as well as Windows. The modular structure resembles the layered structure in that each kernel section has defined, protected interfaces. However, the modular structure is more flexible than a layered system, because any module can call any other module.

Hybrid Structure
Most modern operating systems do not follow any strictly defined structure. Instead, they adopt a hybrid structure, which combines multiple approaches to address performance, security, and usability issues. For example, Linux is monolithic, because having the operating system in a single address space provides very efficient performance; however, it is also modular, so that new functionality can be dynamically added to the kernel. The details of Linux and Windows 10 can be found in Chapter 20 and Chapter 21, respectively.

Building and Booting an Operating System
Most commonly, a computer system, when purchased, has an operating system already installed. However, if you wish to replace the preinstalled operating system or add additional operating systems, then you can generate (or build) an operating system from scratch by following these steps:
- Step 1: Write the operating system source code (or obtain previously written source code).
- Step 2: Configure the operating system for the system on which it will run.
- Step 3: Compile the operating system.
- Step 4: Install the operating system.
- Step 5: Boot the computer and its new operating system.
37 Building and Booting Linux Assume that you are using an old version of Linux; here are the steps required to build a new Linux kernel: Download the Linux source code (http://www.kernel.org) Configure the kernel using the command "make menuconfig". This step generates the .config configuration file. Compile the main kernel using the command "make". The make command compiles the kernel based on the configuration parameters identified in the .config file, producing the file vmlinuz, which is the kernel image. Compile kernel modules using the command "make modules". Just as with compiling the kernel, module compilation depends on the configuration parameters specified in the .config file. Install the kernel modules using the command "make modules_install". Install the new kernel on the system using the command "make install". When the system reboots, it will begin running this new operating system. 38 Installing Linux as a Virtual Machine If your computer is powerful enough, you can install Linux as a virtual machine. Suppose that we plan to use VirtualBox to install Ubuntu Linux as a virtual machine; the following steps need to be carried out: Download and install VirtualBox from https://www.virtualbox.org Download the Ubuntu ISO image from https://www.ubuntu.com Instruct VirtualBox to use the ISO as the bootable medium and boot the virtual machine Answer the installation questions and then install and boot the operating system as a virtual machine 39 Installing Linux as a Virtual Machine Other companies that provide virtualization software: VMware: https://www.vmware.com/ Parallels: https://www.parallels.com/ Having a Linux machine or a Linux virtual machine could help you complete the assignments: 1) You can install an IDE on the machine and test your program locally. 2) Note that you still need to test your program on the testing machine before submitting your assignments.
40 System Boot The process of starting a computer by loading the kernel is known as booting the system. On most systems, the booting process proceeds as follows: Step 1: A small piece of code known as the bootstrap program or boot loader locates the OS kernel. Step 2: The kernel is loaded into memory and started. Step 3: The hardware (such as CPU registers and main memory) is initialized. Step 4: The root file system is mounted. In this section, we briefly describe the boot process. 41 System Boot - BIOS For legacy computers, Step 1 of the booting process is based on Basic Input/Output System (BIOS), a small boot loader. Originally, BIOS was stored in a ROM chip on the motherboard. In modern computer systems, BIOS is stored on flash memory on the motherboard so that it can be rewritten without removing the chip from the motherboard. The BIOS-based Step 1 typically involves three stages: Stage 1: When the computer is first powered on, the small boot loader, BIOS, is executed. Stage 2: This initial boot loader usually does nothing more than loading a second boot loader, which is typically located at a fixed disk location called the boot block. Typically, the second boot loader is a simple program and only knows the address and the length of the remainder of the bootstrap program. Stage 3: In the typical scenario, the remainder of the bootstrap program is executed to locate the OS kernel. 42 System Boot - UEFI For recent computers, the booting process is based on Unified Extensible Firmware Interface (UEFI). UEFI has several advantages over BIOS, including: Better support for larger disks (over 2 TB) Flexible pre-OS environment: UEFI can support remote diagnostics and repair of computers, even with no operating system installed. 
The details of UEFI can be found here: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface 43 Chapter 3 Processes Dalhousie University CSCI 3120: Operating Systems Chapter 3: Processes Process Concept Process Scheduling Operations on Processes Interprocess Communication IPC in Shared-Memory Systems IPC in Message-Passing Systems Examples of IPC Systems Communication in Client-Server Systems 2 Process Concept Early computers allowed only one program to be executed at a time. This program had complete control of the system and access to all of the system's resources. In contrast, modern computer systems allow multiple programs to be loaded into memory and executed concurrently. Formally, a process is a program in execution. In the early days, a process was called a job. In this course, the term "process" is used more often; a few commonly accepted terms still use "job", and for those terms we do not replace it with "process". 3 Process Concept Text Section: executable code Data Section: global variables Heap Section: memory that is dynamically allocated during run time Stack Section: temporary data storage when functions are invoked It contains function parameters, return addresses, local variables, etc. Layout of a Process in Memory 4 Process Concept Note that the sizes of the text and data sections are fixed, as they do not change during program run time. However, the stack and heap sections can shrink and grow dynamically during program execution. Each time a function is called, an activation record containing function parameters, local variables, and the return address is pushed onto the stack; when control is returned from the function, the activation record is popped from the stack. Similarly, the heap grows as memory is dynamically allocated, and shrinks when memory is returned to the system. Although the stack and heap sections grow toward each other, the operating system must ensure they do not overlap.
5 Memory Layout of a C Program The data section is divided into two sub-sections: initialized and uninitialized data. A separate section is provided for the argc and argv parameters passed to the main() function. 6 Memory Layout of a C Program With C language, it is possible to pass values from the command line to your C programs when they are executed. These values are called command line arguments and many times they are important for your program especially when you want to control your program from outside instead of hard coding those values inside the code. The command line arguments are handled using main() function arguments. argc indicates the sum of one and the number of command-line arguments. argv[] is an array of strings (character pointers) where each element points to a command-line argument. argv is a pointer to the name of the program argv through argv[argc-1] contain the actual command-line arguments. 7 Process Concept – Process State As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states: New: The process is being created. Running: Instructions in the process are being executed. Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal). Ready: The process is waiting to be assigned to a processor. Terminated: The process has been terminated. 8 Diagram of Process State Note that the state names are generic. They vary across operating systems. A generic state transition diagram illustrating how a process changes from one state to another is presented below. 9 Process Concept – Process Control Block Process Control Block (PCB), also known as Task Controlling Block (TCB), is a data structure in the OS kernel, which contains the information needed to manage the scheduling of a process. Process state – running, waiting, etc. 
Process number – Process ID Program counter – location of instruction to execute CPU registers – contents of all process-related registers 10 Process Concept – Process Control Block CPU scheduling information – process priority, pointers to scheduling queues Memory management information – memory allocated to the process Accounting information – CPU used, clock time elapsed since start, time limits I/O status information – I/O devices allocated to process, list of open files 11 Process Concept – Threads The process model discussed so far has implied that a process is a program that performs a single thread of execution. For example, when a process is running a word-processor program, a single thread of instructions is being executed. This single thread of control allows the process to perform only one task at a time. Thus, the user cannot simultaneously type in characters and run the spell checker. Most modern operating systems have extended the process concept to allow a process to have multiple threads of execution and thus to perform more than one task at a time. On systems that support threads, the PCB is expanded to include information for each thread. Details about threads will be discussed in Chapter 4. 12 Process Scheduling – Scheduling Queues In a computer system running concurrent processes, the OS needs to schedule processes effectively. The number of processes currently in memory is known as the degree of multiprogramming. In general, most processes can be described as either I/O bound or CPU bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time to do computations. 
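The PCB fields listed above can be pictured as a C structure. The sketch below is purely illustrative (every field name and size is our own choice, not taken from any real kernel; Linux's struct task_struct, for instance, holds far more state). It also includes the linked-list pointer that scheduling queues rely on:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative process states, mirroring the generic state diagram. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* A simplified, hypothetical process control block. */
struct pcb {
    int             pid;             /* process number                  */
    enum proc_state state;           /* running, waiting, etc.          */
    uint64_t        program_counter; /* location of next instruction    */
    uint64_t        registers[16];   /* saved CPU register contents     */
    int             priority;        /* CPU-scheduling information      */
    void           *page_table;      /* memory-management information   */
    uint64_t        cpu_time_used;   /* accounting information          */
    int             open_files[16];  /* I/O status information          */
    struct pcb     *next;            /* link to the next PCB in a queue */
};

/* Append a PCB to the tail of a queue (e.g. the ready queue),
   returning the (possibly new) queue head. */
struct pcb *enqueue(struct pcb *head, struct pcb *p) {
    p->next = NULL;
    if (head == NULL)
        return p;
    struct pcb *cur = head;
    while (cur->next != NULL)
        cur = cur->next;
    cur->next = p;
    return head;
}
```

This matches the slide's description of the ready queue: a header pointer to the first PCB, with each PCB pointing to the next.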
13 Process Scheduling – Scheduling Queues To schedule processes effectively, an OS typically maintains multiple scheduling queues, such as the ready queue and wait queue (details on next slide): Ready queue – a set of all processes residing in main memory, ready and waiting to be executed This queue is generally stored as a linked list. A ready-queue header contains a pointer that points to the first PCB in the list, and each PCB includes a pointer that points to the next PCB in the ready queue. Wait queue – a set of processes waiting for an event (e.g. I/O) This queue adopts a similar data structure. 14 Process Scheduling – Scheduling Queues The ready queue and wait queue 15 Process Scheduling – Queuing Diagram The following queueing diagram illustrates how process scheduling works. Two types of queues are present: the ready queue and a set of wait queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system (details on next slide). 16 Process Scheduling – Queuing Diagram A new process is initially put in the ready queue. It waits there until it is selected for execution. Once the process is allocated a CPU core, it will be executed. Afterwards, one of several events could occur: The process could issue an I/O request and then be placed in an I/O wait queue. The process could create a new child process and then be placed in a wait queue while it awaits the child's termination. The process could be forcibly removed from the core (as a result of an interrupt or having its time slice expire) and be put back in the ready queue. In the first two cases, the process eventually switches from the waiting state to the ready state and is then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues, and has its PCB and resources deallocated.
17 Process Scheduling - CPU Scheduling A process migrates among the ready queue and various wait queues throughout its lifetime. The role of the CPU scheduler is to select a process from the processes in the ready queue and allocate a CPU core to the selected one. To be fair to all processes and guarantee timely interaction with users, CPU scheduler must select a new process for the CPU frequently. An I/O-bound process may execute for only a few milliseconds before waiting for an I/O request, which helps frequent switching. Although a CPU-bound process requires a CPU core for longer durations, the scheduler is unlikely to grant the core to a process for an extended period. Instead, it is likely designed to forcibly remove the CPU from a CPU-bound process and select another process to run. Therefore, the CPU scheduler executes at least once every 100 milliseconds, although typically much more frequently. 18 Process Scheduling - Context Switch Switching the CPU core to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch and is illustrated in the following figure. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process (which is scheduled to run). Context switch time is overhead, because the system does not do useful work while switching. Switching speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions. Typically, context switch takes several microseconds. 19 Context Switch From Process to Process 20 Operations on Processes - Process Creation OS needs to provide mechanisms for process creation and process termination. The creating process is called a parent process, and the created process is called the child process of the parent process. 
A child process may in turn create other processes, forming a tree of processes (details about the tree can be found in the next slide). 21 Operations on Processes - Process Creation The previous figure illustrates a typical tree of processes for the Linux operating system, showing the name of each process and its pid. Most operating systems (including UNIX, Linux, and Windows) identify each process using a unique process identifier (pid), which is typically an integer number. The systemd process (which always has a pid of 1) serves as the root process for all user processes, and is the first user process created when the system boots. With traditional UNIX, the root process is called the init process. Once the system has booted, the systemd process creates processes which provide additional services such as a web server, print server, and ssh server. 22 Operations on Processes - Process Creation When a process creates a new process, two possibilities for execution exist: The parent continues to execute concurrently with its children. The parent waits until some or all of its children have terminated. There are also two address-space possibilities for the new process: The child process is a duplicate of the parent process (it has the same program and data as the parent). The child process has a new program loaded into itself. We focus on how child creation is implemented on UNIX/Linux. 23 Operations on Processes - Process Creation On Unix, a child process is created by the fork() system call. The child process consists of a copy of the address space of the original process. Both the parent and the child continue executing, and the next instruction executed in each is the one right after the fork(), with one difference: The fork() call returns zero in the child process, while the positive process identifier of the child is returned to the parent. If fork() fails, it returns -1.
After a fork() system call, the child process typically uses one of the exec() family of system calls (which includes several functions) to replace the process's memory space with a new program. 24 Operations on Processes - Process Creation After a fork() system call, the parent process typically issues a wait() system call to move itself into the wait queue until the termination of the child. When the child process terminates (i.e. either all the instructions in the child process are executed or the exit() system call is invoked in the child process), the wait() system call in the parent process is completed and the parent process resumes its execution. 25 In the GNU C library, pid_t is implemented as "int". Normally, a pid is non-negative. Note that execlp() is one of the exec() family of system calls. The child process overlays its address space with the UNIX command ls using the execlp() system call. If execlp() is successful, the child process is replaced by the binary executable of "ls". If execlp() fails, it returns -1. 26 wait(&wait_status): 1) The exit status of the child process is assigned to wait_status; 2) The return value of wait(&wait_status): On success, returns the process ID of the terminated child; on failure, -1 is returned. 3) The details can be found here: https://man7.org/linux/man-pages/man2/wait.2.html 27 Operations on Processes - Process Creation The exec() family includes a series of system calls; we focus on two of them: execlp() and execvp(). 'l' indicates a list arrangement: Namely, the exec() function uses a series of NULL-terminated arguments. 'v' indicates an array (i.e. vector) arrangement: Namely, the exec() function uses an array of NULL-terminated arguments. 'p' indicates that the current value of the environment variable PATH should be used when the system searches for executable files.
28 Operations on Processes - Process Creation execlp(): It is used when the number of arguments to be passed to the program to be executed is known in advance. file: It is the name of the file to be executed. If the specified file name includes a slash character (i.e. the full pathname is provided), then PATH is ignored, and the file at the specified pathname is executed. In the previous example, the first argument for execlp() is "ls". This argument can be replaced by "/usr/bin/ls" if we would like to use this specific executable file. 29 Operations on Processes - Process Creation execlp(): It is used when the number of arguments to be passed to the program to be executed is known in advance. arg0, …, argn: Each of them is a pointer to a NULL-terminated string. The first argument in this list, arg0, corresponds to the name of the executable. Other arguments correspond to the options for the executable. In the previous example, these arguments are "ls -l". NULL: It is used to mark the end of the argument list. 30 Operations on Processes - Process Creation execvp(): It is used when the number of arguments for the program to be executed is dynamic. file: It is the name of the file to be executed. If the specified filename includes a slash character (i.e. the pathname is provided), then PATH is ignored, and the file at the specified pathname is executed. argv: It is an array of character pointers that point to NULL-terminated strings. Note that the last member of this array must be a NULL pointer. The example program based on execvp() is included in the next slide.
32 Operations on Processes - Process Creation The system calls provided by Linux kernel: https://man7.org/linux/man-pages/index.html fork(): https://www.man7.org/linux/man-pages/man2/fork.2.html exec(): https://man7.org/linux/man-pages/man3/exec.3.html wait(): https://man7.org/linux/man-pages/man2/wait.2.html exit(): https://www.man7.org/linux/man-pages/man3/exit.3.html 33 Operations on Processes - Process Creation As mentioned previously, after a fork() system call: The child process typically uses one of the exec() family of system calls to replace the process's memory space with a new program. The parent typically issues a wait() system call to move itself into the wait queue until the termination of the child. However, the child and the parent do not have to invoke exec() and wait(). If the child and the parent do not invoke these system calls, both processes will continue to be executed and the next instruction is the one right after the fork(). 34 Operations on Processes - Process Creation In the following program, the parent and the child do not invoke exec() and wait(). What will be displayed on the screen? 35 Operations on Processes - Process Creation After fork(), parent and child will continue to execute the 2nd printf() statement. Consequently: 1) “Hello” will be displayed once 2) Thereafter, “bye” will be displayed two times. 36 Operations on Processes - Process Creation If we invoke fork() two times, how many child processes will be created? 37 Operations on Processes - Process Creation Invoking fork() two times will lead to: 2 child processes 1 grandchild process In total, the program will result in 4 processes. Here is a sample output of the program: 38 Operations on Processes - Process Creation We can also use a loop to invoke fork() two times. 39 Operations on Processes - Process Creation With 2 iterations of the loop, 2 child processes and 1 grandchild process will be created. 
Here is a sample output: We can modify the behavior of the parent and child processes to control the total number of child processes. An example program based on the previous one is included in the next slide. 40 Operations on Processes - Process Creation This example shows how a parent process creates 2 child processes and waits for them to terminate. In this example, no grandchild process is created due to the use of exit(). Each child process invokes exit() to terminate itself before the next iteration of the loop starts. 41 Operations on Processes - Process Creation WIFEXITED(status) returns true if the child terminated normally. 42 Operations on Processes - Process Creation The differences between this program and the previous one: Each child process invokes exit() to terminate itself before the next iteration of the loop starts. The parent waits until the child processes are finished. As a result, 2 child processes are created and no grandchild process is created. Here is a sample output: 43 Operations on Processes - Process Termination Case #1: A process terminates when: All of its instructions have been executed Alternatively, the process invokes the exit() system call, providing an exit status as a parameter: exit(1); When exit() is invoked, all the resources of the process (including physical and virtual memory, open files, and I/O buffers) are deallocated and reclaimed by the operating system. Typically, a zero exit status indicates success, while a non-zero exit status (ranging from 1 to 255) indicates failure. 44 Operations on Processes - Process Termination Case #2: Termination can occur in other circumstances as well. A process can cause the termination of another process via an appropriate system call, such as kill(). Usually, such a system call can be invoked only by the parent of the process that is to be terminated.
Note that a parent needs to know the identities of its children if it would like to terminate them. Thus, when one process creates a new process, the identity of the newly created process is passed to the parent. 45 Interprocess Communication Processes executing concurrently can be divided into two categories: A process is independent if it does not share data with any other processes executing in the system. A process is cooperating if it can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process. There are several reasons for providing an environment that allows process cooperation: Information sharing Computation speedup: If we want a particular task to run faster, we can break it into subtasks, which can be executed in parallel. Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads. 46 Interprocess Communication Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data. There are two fundamental models for interprocess communication: shared memory and message passing. (a) Shared memory. (b) Message passing. 47 IPC in Shared-Memory Systems Interprocess communication using shared memory requires communicating processes to establish a region of shared memory. Typically, a shared-memory region resides in the address space of the process creating the shared-memory segment. Other processes that wish to communicate using this shared- memory segment must attach it to their address space. To illustrate the concept of cooperating processes, let's consider the producer–consumer problem, which is a common paradigm for cooperating processes. A producer process produces information that is consumed by a consumer process. One solution to the producer–consumer problem uses shared memory. 
To allow producer and consumer processes to run concurrently, we need to have a buffer that can be filled by the producer and emptied by the consumer. 48 IPC in Shared-Memory Systems Two types of buffers can be used. The unbounded buffer places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items. The bounded buffer assumes a fixed buffer size. In this case, the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full. Next, let's look more closely at how the bounded buffer is used to implement interprocess communication using shared memory. 49 IPC in Shared-Memory Systems The following variables reside in a region of memory shared by the producer and consumer processes: #define BUFFER_SIZE 10 typedef struct { ... } item; item buffer[BUFFER_SIZE]; int in = 0; int out = 0; 50 IPC in Shared-Memory Systems The shared buffer is implemented as a circular array with two logical pointers: in and out. The variable in points to the next free position in the buffer. The variable out points to the first full position in the buffer. The buffer is empty when in == out. The buffer is full when ((in + 1) % BUFFER_SIZE) == out. #define BUFFER_SIZE 10 typedef struct { ... } item; item buffer[BUFFER_SIZE]; int in = 0; int out = 0; 51 Shared Memory - Producer Process The code for the producer process is shown below. The producer process has a local variable next_produced in which the new item to be produced is stored. item next_produced; while (true) { while (((in + 1) % BUFFER_SIZE) == out) ; buffer[in] = next_produced; in = (in + 1) % BUFFER_SIZE; } 52 Shared Memory - Consumer Process The code for the consumer process is shown below. The consumer process has a local variable next_consumed in which the item to be consumed is stored.
item next_consumed; while (true) { while (in == out) ; next_consumed = buffer[out]; out = (out + 1) % BUFFER_SIZE; } 53 Shared Memory - Number of Items This scheme allows at most (BUFFER_SIZE − 1) items in the buffer at the same time. We leave it as an exercise for you to provide a solution in which BUFFER_SIZE items can be in the buffer at the same time. Hint: One possible solution is based on a flag variable to indicate the number of items in the buffer. We will get back to this example in Chapter 6. One problem that this example did not solve is concurrent memory access. Namely, when both the producer process and the consumer process attempt to access the shared buffer concurrently, what will take place? In Chapter 6 and Chapter 7, we will discuss how synchronization among cooperating processes can be implemented effectively in a shared-memory environment. 54 IPC in Message-Passing Systems The shared-memory scheme requires that: These processes share a region of memory The code for accessing/manipulating the shared memory be written explicitly by the application programmer. Another way to achieve the same goal is for the OS to provide the mechanism for cooperating processes to communicate with each other via a message-passing facility. Message passing provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same memory region. It is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. 55 IPC in Message-Passing Systems A message-passing facility provides at least two operations: send(message) and receive(message). If process P and Q want to communicate, a communication link must exist between them. 
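Before moving on, the empty and full conditions of the circular buffer from the shared-memory example above can be captured as small pure functions and checked directly. This also makes the at-most-(BUFFER_SIZE - 1) limit concrete; the helper names are our own:

```c
#define BUFFER_SIZE 10

/* The buffer is empty when the two logical pointers coincide. */
int buffer_empty(int in, int out) {
    return in == out;
}

/* The buffer is "full" one slot early, so that a full buffer can
   still be distinguished from an empty one. */
int buffer_full(int in, int out) {
    return ((in + 1) % BUFFER_SIZE) == out;
}

/* Number of items currently stored, derived from in and out. */
int items_in_buffer(int in, int out) {
    return (in - out + BUFFER_SIZE) % BUFFER_SIZE;
}
```

With in == 9 and out == 0, buffer_full() reports true even though only 9 items are stored: one slot is sacrificed so that in == out can unambiguously mean "empty", which is exactly the (BUFFER_SIZE - 1) limitation discussed above.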
Here are a few options about the link and send/receive operations: Direct or indirect communication Synchronous or asynchronous communication No buffering or automatic buffering 56 Message Passing – Direct vs Indirect Under direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. With this scheme, the send() and receive() primitives are defined as: send(P, message)—Send a message to process P. receive(Q, message)—Receive a message from process Q. A communication link in this scheme has the following properties: A link can be established between every pair of processes that want to communicate. The processes need to know each other's identity to communicate. A link is associated with exactly two processes. Between each pair of processes, there exists exactly one link. 57 Message Passing – Direct vs Indirect With indirect communication, the messages are sent to and received from mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification. For example, POSIX message queues use an integer value to identify a mailbox. 58 Message Passing – Direct vs Indirect The send() and receive() primitives are defined as follows: send(A, message)—Send a message to mailbox A. receive(A, message)—Receive a message from mailbox A. In this scheme, a communication link has the following properties: A link is established between a pair of processes only if both members of the pair have a shared mailbox. A mailbox may be associated with more than two processes. Between each pair of communicating processes, a number of different links may exist, with each link corresponding to one mailbox. 59 Message Passing – Synchronous vs asynchronous Communication between processes takes place through calls to send() and receive() primitives. 
There are different design options for implementing each primitive. Message passing may be either blocking or nonblocking — also known as synchronous and asynchronous. Blocking send: After starting to send a message, the sending process is blocked until the message is received by the receiving process or by the mailbox. Nonblocking send: The sending process sends the message and resumes other operations. Blocking receive: After starting to wait for a message, the receiver is blocked until a message is available. Nonblocking receive: The receiver is not blocked while it waits for the message. 60 Message Passing – Synchronous vs asynchronous Different combinations of send() and receive() are possible. The solution to the producer–consumer problem becomes trivial when we use blocking send() and blocking receive() statements. The producer merely invokes the blocking send() call and waits until the message is delivered to either the receiver or the mailbox. message next_produced; while (true) { send(next_produced); } 61 Message Passing – Synchronous vs asynchronous Likewise, when the consumer invokes receive(), it blocks until a message is available. message next_consumed; while (true) { receive(next_consumed) } 62 Message Passing – Buffering Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Both the sender and receiver maintain a queue. Basically, such queues can be implemented in three ways: Zero capacity: The queue has a maximum length of zero. Bounded capacity: The queue has a finite length n; thus, at most n messages can reside in it. Unbounded capacity: The queue's length is potentially infinite. The zero-capacity case is sometimes referred to as a message system with no buffering. The other cases are referred to as message systems with automatic buffering. 63 Shared Memory vs. Message Passing With shared memory, a process initially sets up a region of its memory space to use for the IPC. 
Once the region is established within the process, the process issues a system call to request that the kernel make the region shared. Other processes would do the same.
After the initial system call to set up the shared memory, a process can read from and write to the region just as it would access non-shared data in its memory space. That is, the process can write to the region directly. The data then appears within the context of the other process automatically. No explicit system call is required to read the new data.
64
Shared Memory vs. Message Passing
In message passing, whenever a piece of data is to be exchanged, a process will invoke the kernel with a request to send the information to the target process. That is, the user-mode process will copy the data into a buffer or other form of data structure, then issue a system call to request that the data be transferred. Once the kernel is invoked, it will copy the transferred data first into its own memory. At some later point, the target process will also issue a system call to retrieve the data. In short, message-passing techniques require the kernel to be involved in every distinct exchange of data.
65
Shared Memory vs. Message Passing
Features of Shared Memory IPC:
Faster speed: System calls are required only to establish shared-memory regions. Once shared memory is established, all accesses are treated as routine memory accesses, and no assistance from the kernel is required.
More programmer involvement: The code for accessing and manipulating the shared memory must be written explicitly by the application programmer.
Less suitable for distributed environments: In a distributed environment, communicating processes may reside on different computers connected by a network, where it is more difficult to share a memory section.
66
Shared Memory vs.
Message Passing
Features of Message Passing IPC:
Slower speed: Message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention.
Less programmer involvement: System calls are utilized to implement message passing.
More suitable for distributed environments: In a distributed environment, communicating processes may reside on different computers connected by a network.
67
Examples of IPC Systems - POSIX
In this section, we cover two example IPC methods:
Shared Memory: POSIX Shared Memory
Message Passing: Pipe/FIFO
In this course, we first explore the POSIX API for shared memory. With POSIX, shared memory is implemented using a memory-mapped file.
https://en.wikipedia.org/wiki/Shared_memory#Support_on_Unix-like_systems
Step 1: A process must first create a shared-memory object (essentially, a file to be mapped to a memory section) using the shm_open() system call (note that it is also used to open an existing shared-memory object), as follows:
shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
68
Examples of IPC Systems - POSIX
shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
The first parameter specifies the name of the shared-memory object. Processes that wish to access this shared memory must refer to the object by this name.
The second parameter is a bit mask created by performing the OR operation on the relevant flags. In this example, the parameter indicates that the shared-memory object is to be created if it does not yet exist (O_CREAT) and that the object is open for reading and writing (O_RDWR).
The last parameter specifies the access permissions of the new file corresponding to the created object. In this example, “0666” means read and write permission for the owner, the group, and other users.
A successful call to shm_open() returns an integer file descriptor for the shared-memory object.
69
Examples of IPC Systems - POSIX
Step 2: Once the object is established, the ftruncate() function is used to configure the size of the object in bytes. The following call sets the size of the object to 4,096 bytes.
ftruncate(shm_fd, 4096);
Step 3: The mmap() function maps a file to a memory section so that file operations aren't needed. It also returns a pointer that is used for accessing the shared-memory object.
The programs shown in the following figures use the POSIX API for shared memory to implement the producer–consumer problem. The producer establishes a shared-memory object and writes to shared memory. The consumer reads from shared memory.
70
[Figure: POSIX producer program, with the following annotations:]
When the first parameter of mmap() is set to 0, the kernel chooses the address at which to create the mapping. This is the most portable method of creating a new mapping.
Desired memory protection: PROT_WRITE means the mapped region may be written.
MAP_SHARED indicates that updates to the mapping are visible to other processes.
The last parameter is the offset. When it is set to 0, the start of the file corresponds to the start of the memory object.
sprintf() generates a C string and places it in memory so that ptr points to it. Note that the NULL character at the end of the string is not included.
strlen() returns the length of a string, excluding the NULL character at the end of the string.
71
[Figure: POSIX consumer program, with the following annotations:]
Desired memory protection: PROT_READ means the mapped region may be read.
shm_unlink() removes a shared-memory object.
72
Examples of IPC Systems - POSIX
When you compile the previous programs, you need to link the realtime extensions library by adding “-lrt” to the compiling command.
Suppose the previous programs are named “shm-posix-producer.c” and “shm-posix-consumer.c”. Then the commands to compile and execute them are:
gcc -o producer shm-posix-producer.c -lrt
gcc -o consumer shm-posix-consumer.c -lrt
./producer
./consumer
After “producer” is executed, a file named “OS” corresponding to the shared-memory object is placed in /dev/shm
After “consumer” is executed, the file named “OS” disappears, since shm_unlink() is invoked.
73
Examples of IPC Systems - POSIX
shm_open()
https://man7.org/linux/man-pages/man3/shm_open.3.html
The operation of shm_open() is analogous to that of open():
https://man7.org/linux/man-pages/man2/open.2.html
ftruncate()
https://man7.org/linux/man-pages/man2/truncate.2.html
mmap()
https://man7.org/linux/man-pages/man2/mmap.2.html
shm_unlink()
https://man7.org/linux/man-pages/man3/shm_unlink.3p.html
74
Examples of IPC Systems - Pipe
The pipe is one of the first IPC mechanisms in early UNIX systems. We focus on the pipe implementation on UNIX. Overall, the pipe belongs to the message-passing category of IPC.
When implementing a pipe, we need to consider four issues:
1. Does the pipe allow bidirectional communication, or is communication unidirectional?
2. If two-way communication is allowed, is it half duplex (data can travel only one way at a time) or full duplex (data can travel in both directions at the same time)?
3. Must a relationship (such as parent–child) exist between the communicating processes?
4. Can the pipes communicate over a network, or must the communicating processes reside on the same machine?
In this chapter, we explore two common types of pipes used on UNIX: ordinary pipes and named pipes.
75
Examples of IPC Systems – Ordinary Pipe
Ordinary pipes allow two processes to communicate in standard producer–consumer fashion.
The producer writes to one end of the pipe (the write end).
The consumer reads from the other end (the read end).
As a result, ordinary pipes are unidirectional, allowing only one-way communication. If two-way communication is required, two pipes must be used, with each pipe sending data in a different direction.
On UNIX systems, ordinary pipes are constructed using the function:
pipe(int fd[])
76
Examples of IPC Systems – Ordinary Pipe
pipe(int fd[])
This function creates a pipe that is accessed through the “int fd[]” file descriptors:
fd[0] is the read end of the pipe.
fd[1] is the write end of the pipe.
UNIX treats a pipe as a special type of file. Thus, pipes can be accessed using ordinary read() and write() system calls.
An ordinary pipe can be used by the creating process and its child process. Typically, a parent process creates a pipe and uses it to communicate with a child process that it creates via fork().
77
Examples of IPC Systems – Ordinary Pipe
As the following figure illustrates, any writes by the parent to its write end of the pipe (i.e., fd[1]) can be read by the child from its read end (i.e., fd[0]) of the pipe.
78
Examples of IPC Systems – Ordinary Pipe
Example UNIX program: Parent writes and child reads.
79
A pipe has a limited buffer size. If the pipe is full, then a write will block or fail. Since Linux 2.6.11, the pipe buffer is 65,536 bytes.
The strlen() function calculates the length of the string, excluding the terminating null byte ('\0').
80
Examples of IPC Systems – Ordinary Pipe
Please note:
An ordinary pipe is unidirectional.
4 Either the parent process writes and the child process reads, or vice versa, but not both.
4 In the previous example, the parent writes to the pipe, and the child reads from it.
4 It is possible for the child to write and the parent to read; here is one example: https://www.geeksforgeeks.org/pass-the-value-from-child-process-to-parent-process/
4 It is important to notice that both the parent process and the child process initially close their unused ends of the pipe.
81
Examples of IPC Systems – Ordinary Pipe
4 However, what if both the parent and the child need to write to and read from the pipe simultaneously?
4 The solution is two-way communication using two ordinary pipes. Namely, two pipes are required to establish two-way communication. Here is one example: https://www.tutorialspoint.com/inter_process_communication/inter_process_communication_pipes.htm
Since ordinary pipes are not bidirectional, the half-duplex/full-duplex distinction does not apply to them.
Ordinary pipes require a parent–child relationship between the communicating processes. This means that these pipes can be used only for communication between processes on the same machine.
82
Examples of IPC Systems – Named Pipe
Ordinary pipes provide a simple mechanism for a pair of processes to communicate. However, once the processes have finished communicating and have terminated, the ordinary pipe does not continue to exist.
Named pipes provide a much more powerful communication tool:
Communication can be bidirectional.
No parent–child relationship is required.
Named pipes continue to exist after the communicating processes have terminated.
However, to use a named pipe on UNIX, the communicating processes must reside on the same machine.
83
Examples of IPC Systems – Named Pipe on UNIX
Named pipes are referred to as FIFOs in UNIX systems. Once created, they appear as files in the file system.
A FIFO is created with the mkfifo() system call and manipulated with the ordinary open(), read(), write(), and close() system calls. A FIFO will continue to exist until it is explicitly deleted from the file system.
Although FIFOs allow bidirectional communication, only half-duplex transmission is permitted on UNIX. If data must travel in both directions, two FIFOs are typically used.
With the MS Windows version of named pipes, full-duplex communication is allowed, and the communicating processes may reside on either the same machine or different machines.
As mentioned previously, we focus on the UNIX version.
84
Examples of IPC Systems – Named Pipe on UNIX
mkfifo()
https://man7.org/linux/man-pages/man3/mkfifo.3.html
Example C Program Based on FIFO:
https://www.geeksforgeeks.org/named-pipe-fifo-example-c-program/
85
Communications in Client-Server Systems - Socket
Shared memory and message passing can be used for both interprocess communication and the communication in client–server systems. In this section, we explore two other strategies for communication in client–server systems:
Socket
Remote Procedure Call (RPC)
Socket Overview:
A socket is defined as an endpoint for communication.
A pair of processes communicating over a network employs a pair of sockets (one for each process).
A socket is identified by an IP address concatenated with a port number, such as “161.25.19.8:80”.
86
Communications in Client-Server Systems - Socket
Communication via Sockets
87
Communications in Client-Server Systems - RPC
A Remote Procedure Call (RPC) is a special kind of function call. The function (i.e., procedure) is called on the local computer, but it is executed on the remote computer. Once the function execution is completed, the result is returned to the local computer.
88