Chapter 2: Program Execution

This Week's Program
We will explore how programs are managed and executed by the operating system:
1. Processes — abstraction of a program, process control block, states
2. Threads — abstraction of a unit of execution, user vs. kernel threads, multi-threading
3. Scheduling — general concepts, batch, interactive, real time

Processes

One System, Multiple Programs
Your system runs a large number of applications at the same time: e-mail client, web browser(s), music player, development environment, and so on. In such a multiprogramming system, there are more programs than CPUs available to run them. The job of the OS is to switch between programs very quickly, every few tens or hundreds of milliseconds, to give the illusion of parallelism. The OS therefore needs to keep track of each running program to manage it properly.

The Process Model
A process is the abstraction that represents an instance of a running program. Processes contain the set of information required for the program to execute properly:
- Code (the binary representation of the program's instructions)
- Memory (variables, stack, heap, etc.)
- Registers (instruction pointer, stack pointer, etc.)
- Resources in use (open files, sockets, etc.)
They are independent entities that the OS manipulates, choosing which one to run and when.
(Figure: processes A, B and C alternating on a single CPU over time — each change is a process switch.)
Caution: Program ≠ process! The program is the "recipe" (the code); the process is an execution of the program.

Process Creation
A new process can be created for various reasons:
- A user creates it by starting a new program, e.g., by clicking on its icon or starting it from the command line
- Another process creates it using a process creation system call, e.g., your web browser starts your PDF reader after downloading a file
- When the system starts, an initial process is created to initialize the system, launch other programs like the desktop environment, etc.
Depending on their purpose, processes can be classified as:
- Foreground processes that interact with the user, e.g., video player, calculator, etc.
- Background processes that run by themselves with no user interaction required, e.g., weather service, notification service, network configuration, etc.
Note: Background processes that always stay in the background are also called daemons or services.

Process Hierarchy
Most systems keep an association between a process and the process that created it, its parent. In UNIX-based systems, all processes are linked in this way, creating a process tree. The root of the tree is init, the first process created when the system boots.
(Figure: example process tree rooted at init, with children such as thermald, bluetoothd, sshd, cupsd, NetworkManager and GNOME; GNOME in turn spawns thunderbird, firefox with its tabs, vscode with its windows, and gnome-terminal running ssh and emacs.)
The init process also adopts orphaned processes: e.g., if gnome-terminal terminates without terminating its child processes (ssh and emacs), they become children of init.
Windows does not keep a full process tree. Parent processes are given a handle on their children, and the handle can be passed on to another process.
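To make the parent/child association concrete, here is a minimal sketch (not from the slides) that prints a process's own PID and its parent's PID using the POSIX calls getpid() and getppid():

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Every process knows its own ID and the ID of the process that created it. */
        printf("My PID:        %d\n", getpid());
        printf("My parent PID: %d\n", getppid());
        return 0;
    }

Run from a shell, the parent PID is typically the shell's; if the parent exits first, the process is re-parented (to init, or to a subreaper on Linux).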
Process States
After its creation, a process can be in one of three states:
- Ready (also called runnable): the process can be executed and is waiting for the scheduler to allocate a CPU to it
- Running: the process is currently being executed on a CPU
- Blocked: the process cannot run because it is waiting for an event to complete
State transitions:
1. Process creation (∅ → ready)
2. The scheduler picks the process to run on the CPU (ready → running)
3. The scheduler picks another process to run on the CPU (running → ready)
4. The process cannot continue to run and waits for an event (running → blocked), e.g., keyboard input, network packet, etc.
5. The blocking event completes and the process is ready to run again (blocked → ready)

Process Model and Scheduling
Processes have different behaviors depending on the program they execute.
- A command line interpreter (CLI) repeatedly waits for a user to type in commands (running → blocked), then wakes up and executes them when available (blocked → ready ↔ running)
- A video player repeatedly reads data from disk and waits for it to be available in memory (running → blocked), then copies the frames into graphical memory (blocked → ready ↔ running)
- A program that computes digits of π never waits and only uses the CPU (ready ↔ running)
The scheduler is the component of the OS that manages the allocation of CPU resources to processes, depending on their state.
(Figure: processes P0 … Pn feeding into the scheduler, which assigns them to the CPU.)

Process Implementation
For each process, the OS maintains a process control block (PCB) that describes it:
- Runtime: process ID (PID), parent process ID (PPID), user ID (UID), group ID (GID), registers (instruction pointer, stack pointer, general purpose, page table pointer such as CR3), signals, scheduler metadata, used CPU time, time of next alarm
- Memory: address space, segments, page table, memory mappings
- Files: current working directory (CWD), file descriptors
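As a rough illustration only (real kernels use much richer structures, e.g. Linux's task_struct), a PCB could be sketched as a C structure; all field names below are hypothetical:

    #include <sys/types.h>

    /* Hypothetical, heavily simplified process control block (sketch only). */
    struct pcb {
        /* Runtime */
        pid_t pid, ppid;            /* process and parent process IDs       */
        uid_t uid;                  /* owning user                          */
        gid_t gid;                  /* owning group                         */
        int   state;                /* READY, RUNNING or BLOCKED            */
        unsigned long ip, sp;       /* saved instruction and stack pointers */
        unsigned long gp_regs[16];  /* saved general-purpose registers      */
        unsigned long cpu_time;     /* accounting: CPU time used so far     */

        /* Memory */
        unsigned long page_table;   /* value loaded into CR3 / TTBR         */
        void *mappings;             /* description of segments and mappings */

        /* Files */
        char *cwd;                  /* current working directory            */
        int  *fd_table;             /* open file descriptors                */
    };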
Process Memory Layout
A process' memory, or address space, is split into several sections (from low addresses at 0x00000000 to high addresses at 0xFFFFFFFF):
- Text: the program code; it contains the actual binary instructions that will be executed by the CPU
- Data: initialized static variables, i.e., initialized global variables, static local variables and constant strings, e.g., a global int count = 0; or a local static int idx = 0;
- BSS (block starting symbol): uninitialized static variables, e.g., a global static int i;
- Heap: dynamically allocated memory, growing towards higher addresses, e.g., obtained through malloc(), brk()/sbrk() or mmap()
- Stack: local variables, environment variables and calling context (stack frames), growing towards address 0x00000000
- Kernel: kernel memory is shared between all processes, reserved for the kernel and accessible only in supervisor mode; user space cannot access it. In 32-bit Linux, 1 GB is reserved for the kernel (above 0xC0000000) and 3 GB for user space; in 64-bit Linux, 128 TB are reserved for the kernel and 128 TB for user space.

Process Registers
Registers are the part of the process state that is physically on the CPU:
- Instruction Pointer (IP), or Program Counter (PC): the address of the next instruction to execute (if not jumping). x86-64: RIP; arm64: PC
- Stack Pointer (SP): the address of the top of the stack. x86-64: RSP; arm64: SP
- General Purpose (GP): values or addresses used by instructions. x86-64: RAX, RBX, …, R15; arm64: X0–X30
- Page Table Pointer: the address of the memory mapping. x86-64: CR3; arm64: TTBR
We'll come back to how memory mappings and page tables work in a future lecture.

Context Switch
When the OS changes the currently running process, it performs a context switch: it exchanges the PCB of the current process with the PCB of the next process. Context switching between the running process A and the ready process B happens in several steps:
1. Change the state of the currently running process A to ready
2. Save the CPU registers into A's PCB
3. Copy the registers from B's PCB into the CPU registers
4. Change the state of B from ready to running

Process Isolation
Another property of processes is that they are isolated from each other. They act independently, as if they were running alone on the machine: they do not share memory, open files, etc.
Tip: We will see how this is enforced later when discussing memory and file management.
They can still communicate and collaborate if explicitly stated in the program (multiprocessing). We will discuss that in a future lecture.

POSIX API: Processes
Let's quickly see the POSIX API to manage processes in C.
- pid_t fork(void);
  Create a new process by duplicating the calling process, i.e., its PCB and memory. Upon success, returns the PID of the child in the parent and 0 in the child. Returns -1 upon error.
- int exec[lvpe](const char *path, const char *arg, ...);
  This family of functions replaces the current program with the one passed as a parameter. Returns -1 upon error. Upon success, it does not return: the new program starts.
- pid_t wait(int *status);
  Wait for any child process to terminate or have a change of state due to a signal. Returns the PID of the child on success, -1 on failure.
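The course example on the next slide uses only fork() and wait(); as a complement, here is a hedged sketch of the classic fork–exec–wait pattern (the launched program, ls, is just an illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* duplicate the calling process */

        if (pid == -1) {
            perror("fork");
            return EXIT_FAILURE;
        }

        if (pid == 0) {
            /* Child: replace this program with ls. */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");               /* only reached if exec failed */
            _exit(EXIT_FAILURE);
        }

        /* Parent: wait until the child terminates. */
        int status;
        wait(&status);
        printf("child %d exited with status %d\n", pid, WEXITSTATUS(status));
        return EXIT_SUCCESS;
    }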
POSIX API: Processes – Example
fork.c — compile with gcc -o fork fork.c, then run ./fork:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        pid_t pid;

        printf("Parent PID: %d\n", getpid());

        pid = fork();
        switch (pid) {
        case -1:
            printf("[%d] Fork failed...\n", getpid());
            return EXIT_FAILURE;
        case 0:
            printf("[%d] I am the child!\n", getpid());
            sleep(3);
            printf("[%d] I waited 3 seconds\n", getpid());
            break;
        default:
            printf("[%d] Fork success, my child is PID %d\n", getpid(), pid);
            wait(NULL);
            printf("[%d] Child has terminated, wait is over\n", getpid());
        }

        return EXIT_SUCCESS;
    }

Output:
    Parent PID: 469002
    Fork success, my child has PID 469003
    I am the child!
    I waited 3 seconds
    Child has terminated, wait over

Processes: a Heavyweight Mechanism
Applications may benefit from executing multiple tasks at the same time: e.g., your development environment needs to wait for keyboard input, check syntax, render the UI, run extensions, etc. Using one process per task is not always a good idea!
- Creating a process means copying an existing one → it is a costly operation!
- Processes don't share memory → good for isolation, but communication must be explicitly set up!
Is there a more lightweight mechanism to execute parallel programs?

Threads
A thread is the entity that executes instructions. It has a program counter, registers and a stack. A process contains one or more threads, and threads in the same process share their memory. Threads are sometimes called lightweight processes.
Tip: Another definition of a process is a set of threads sharing an address space. We will discuss that in future lectures when discussing memory management.

Process Model and Thread Model
Let's go back to our process model and add threads into the mix. Processes group resources together, e.g., memory, open files, signals, etc., while threads execute code. Examples: a web server, a text editor.

Threads: Shared vs. Private Data
Threads in the same process share some resources, while others are thread-private.
Per-process resources: address space, global variables, open files, child processes, pending alarms, signals and signal handlers, accounting metadata.
Per-thread resources: program counter, registers, stack, state.

Threading Implementation
Threads are usually implemented as a thread table in the process control block: the per-process fields of the PCB (PID, PPID, UID/GID, address space, page table and mappings, file descriptors, etc.) are shared by all threads, while each thread table entry holds the thread ID, its registers (instruction pointer, stack pointer, general purpose) and its scheduler metadata (used CPU time, time of next alarm).

POSIX API: Threads
Let's have a quick look at the main functions of the POSIX API to manipulate threads:
- int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*function)(void *), void *arg);
  Create a new thread that will execute function(arg).
- void pthread_exit(void *retval);
  Terminate the calling thread.
- int pthread_join(pthread_t thread, void **retval);
  Wait for thread to terminate.
Note: You need to link the pthread library when compiling your programs, by passing the -pthread option to your compiler.
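To illustrate "threads in the same process share their memory" with the API above, here is a small sketch (not from the slides) in which two threads increment the same global counter, protected by a mutex; compile it with -pthread:

    #include <pthread.h>
    #include <stdio.h>

    static long counter;                      /* shared: lives in the process' data segment */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);        /* without the lock, updates could be lost */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("counter = %ld\n", counter);   /* 200000: both threads saw the same variable */
        return 0;
    }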
POSIX API: Threads – Example
pthread.c — compile with gcc -o pthread pthread.c -pthread, then run ./pthread:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    void *fn(void *arg)
    {
        printf("[%d] String: %s, sleeping 3s now...\n", gettid(), (char *)arg);
        sleep(3);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        pthread_t worker;
        int ret;

        printf("[%d] Creating worker...\n", gettid());

        ret = pthread_create(&worker, NULL, fn, argv[0]);
        if (ret != 0) {
            perror("pthread_create failed\n");
            return EXIT_FAILURE;
        }

        printf("[%d] Waiting for worker thread...\n", gettid());
        pthread_join(worker, NULL);
        printf("[%d] Joined worker thread\n", gettid());

        return EXIT_SUCCESS;
    }

Output:
    Creating worker...
    Waiting for worker thread...
    String: ./pthread, sleeping 3s now...
    Joined worker thread

User-level Threads
Threading can also be handled at the user level:
- This allows programs to use the thread model on OSes that do not support threads
- Scheduling can be performed by the program instead of the OS
It is usually provided by programming languages, e.g., Go, Haskell, Java, or by libraries, e.g., OpenMP, Cilk, greenlet. These runtime systems (programming language or library) must perform a form of context switching and scheduling.

Thread Mapping Models
The OS only manipulates its own threads, i.e., kernel threads; it has no notion of user-level threads. Thus, user-level threads must be attached to these underlying kernel threads: this is called the thread mapping model. There are three models of how user threads are mapped to kernel threads: 1:1, N:1 and N:M.

Thread Mapping Models – 1:1
Each user thread is mapped to a single kernel thread. Threading is completely handled by the OS, thus requiring OS support.
+ Parallelism
+ Ease of use
- No control over scheduling
Examples: Pthread C library, most Java VMs

Thread Mapping Models – N:1
All user threads are mapped to a single kernel thread. The OS doesn't know about user threads, and doesn't need to support threads at all. The runtime system must manage the context switches, scheduling, etc.
+ Full scheduling control
~ Reduced context switch overhead
- No parallelism
Examples: coroutines in Python, Lua, PHP

Thread Mapping Models – N:M
Multiple user threads (N) are mapped to multiple kernel threads (M). This flexible approach provides the benefits of both the 1:1 and N:1 models, at the cost of complexity. Usually, runtime systems statically create one kernel thread per core and schedule an arbitrary number of user threads on them.
+ Parallelism
+ Control over the scheduling
- May be complex to manage for the runtime
- Performance issues when user and kernel schedulers make conflicting decisions
Examples: goroutines in Go, Erlang, Haskell

User Threads and Blocking Operations
When multiple user threads share a kernel thread, they share the kernel thread's state. If a user thread performs a blocking operation, e.g., reading from the disk, all user threads sharing that kernel thread are also blocked. There are two main ways of avoiding this issue:
- Replace all blocking operations with asynchronous operations and wait-free algorithms. This can be done explicitly by programmers, thus increasing code complexity, or transparently by the runtime system or the compiler.
- Rely on cooperative multithreading, similar to coroutines in language theory. This type of user thread is also called a fiber. (A minimal sketch of cooperative switching follows below.)
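As a rough illustration of cooperative user-level switching (not part of the course material), here is a minimal sketch using the POSIX <ucontext.h> interface, in which main and a fiber-like worker explicitly yield to each other on a single kernel thread; ucontext is obsolescent in recent POSIX versions but still available on Linux:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, worker_ctx;
    static char worker_stack[64 * 1024];       /* stack for the user-level "fiber" */

    static void worker(void)
    {
        printf("worker: step 1, yielding back to main\n");
        swapcontext(&worker_ctx, &main_ctx);   /* cooperative yield */
        printf("worker: step 2, returning\n");
    }                                          /* uc_link sends us back to main */

    int main(void)
    {
        getcontext(&worker_ctx);               /* initialise the context ...        */
        worker_ctx.uc_stack.ss_sp = worker_stack;
        worker_ctx.uc_stack.ss_size = sizeof(worker_stack);
        worker_ctx.uc_link = &main_ctx;        /* where to go when worker() returns */
        makecontext(&worker_ctx, worker, 0);   /* ... and point it at worker()      */

        printf("main: switching to worker\n");
        swapcontext(&main_ctx, &worker_ctx);
        printf("main: worker yielded, switching again\n");
        swapcontext(&main_ctx, &worker_ctx);
        printf("main: done\n");
        return 0;
    }

The OS never sees these switches: from the kernel's point of view there is a single thread, which is exactly the N:1 situation described above.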
Scheduling

The Scheduler
The scheduler is the operating system component that allocates CPU resources to threads. It decides which thread runs on which CPU, when and for how long. The scheduling algorithm (or policy) used to take these decisions depends on:
- The system itself, i.e., how many CPUs, how much memory, etc.
- The workloads running on it, i.e., interactive applications, long-running programs, etc.
- The goals to achieve, i.e., responsiveness, energy, performance, etc.

Application Behavior
Each application has its own behavior depending on what it does, but most of them alternate between computation (on the CPU, ready or running) and waiting on IO (blocked), e.g., waiting for data from disk. There are two main classes of applications:
- CPU-bound (or compute-bound): long CPU bursts with few IO waits, e.g., data processing applications
- IO-bound: short CPU bursts and frequent IO waits, e.g., a web server forwarding requests to worker threads

Triggering the Scheduler
Depending on the applications' and the hardware's behavior, the scheduler decides which thread runs on which CPU.
Application behavior:
1. When a process/thread is created (∅ → ready), i.e., after a fork()/pthread_create() ⟹ does the parent or the child thread run first?
2. When a thread terminates (ready/running → terminated) ⟹ which thread runs next?
3. When a thread waits for an IO to complete (running → blocked), e.g., waiting on keyboard input ⟹ which thread runs next?
Hardware behavior:
4. An IO completes, triggering an interrupt and waking a thread (blocked → ready), e.g., a data request to the disk is ready, a network packet arrives, etc. ⟹ let the current thread run, or replace it with the waking thread?
5. A clock interrupt occurs ⟹ preemption?

Preemption
An important characteristic of a scheduler is whether it is preemptive or not. A preemptive scheduler can forcefully interrupt a running task to execute a different one. In other words:
- Non-preemptive schedulers choose a thread to run and let it run until it blocks, terminates or voluntarily yields the CPU
- Preemptive schedulers might stop a running thread to execute another one after a given period of time
Note: Preemption is only possible on systems with a hardware clock.

Categories of Scheduling Algorithms
Depending on the system, its workloads and its goals, different categories of algorithms should be used:
- Batch: business applications with long-running background tasks (compute-bound), e.g., accounting, file conversion/encoding, training machine learning models, etc. ⟶ usually no preemption, to minimize scheduling overhead
- Interactive: users are interacting with the system, or IO-bound applications, e.g., text editor, web server, etc. ⟶ preemption is necessary for responsiveness
- Real time: constrained environments where deadlines must be met, e.g., embedded systems ⟶ may or may not use preemption, as these systems are usually fully controlled, with no arbitrary programs running
Scheduling Metrics
Depending on the workloads, different goals are desirable for the scheduling algorithm. In other words, the scheduler can optimize different metrics depending on requirements.
All categories:
- Fairness: processes should get a similar amount of CPU resources allocated to them. Fairness is not equality: the CPU allocation can be weighted, e.g., depending on priority
- Resource utilization: hardware resources, e.g., CPU and devices, should be kept as busy as possible
Interactive:
- Response time/latency: respond to requests quickly
- User experience: the system should feel responsive to users
Batch:
- Throughput: maximize the number of jobs completed per unit of time, e.g., packets per second, MB/s, etc.
- Turnaround time: minimize the duration between job submission and completion
- CPU utilization: keep the CPU as busy as possible
Real time:
- Meeting deadlines: finish jobs before the requested deadlines to avoid safety problems
- Predictability: behave in a deterministic manner that is easy to model and understand

Now, let's see some scheduling policies!

Batch Scheduling: First-Come, First-Served
The First-Come, First-Served (FCFS) scheduler assigns threads to CPUs in their order of arrival, in a non-preemptive way.
- Runqueue: threads are stored in a list sorted by order of arrival, the first element being the oldest
- Election: choose the head of the runqueue
- Block: remove the thread from the runqueue and trigger an election
- Wake up: add the thread to the runqueue (at the end, like a newly created thread)
Example:
Event            | Running | Runqueue
Initial state    | ∅       | A→B→C→D
Election         | A       | B→C→D
A waits on an IO | B       | C→D
A wakes up       | B       | C→D→A
B terminates     | C       | D→A
E starts         | C       | D→A→E
+ Easy to implement
- With many threads, long latencies for IO-bound threads and high turnaround times for CPU-bound threads

Batch Scheduling: Shortest Job First
The Shortest Job First (SJF) scheduler assumes that a job's run time is known in advance. It works in the same way as FCFS, except that the job queue is sorted by run time.
- Runqueue: threads are stored in a list sorted by increasing total duration
- Election: choose the head of the runqueue
- Block: remove the thread from the runqueue and trigger an election
- Wake up: add the thread to the runqueue (at its sorted position)
With four jobs of durations a, b, c and d run in that order, the average turnaround time is (4a + 3b + 2c + d) / 4, so running the shortest jobs first minimizes it.
Let's assume four tasks A, B, C and D, arriving in that order, with known durations 5, 3, 4 and 3:
Policy | Schedule                   | Total turnaround | Average turnaround
FCFS   | A (5), B (3), C (4), D (3) | 15               | 10
SJF    | B (3), D (3), C (4), A (5) | 15               | 8.5
+ Optimal when all jobs are known in advance
- Does not adapt well to deferred job arrivals

Batch Scheduling: Shortest Remaining Time
The Shortest Remaining Time (SRT) scheduler is a preemptive variant of SJF, where jobs are sorted by their remaining run time.
- Runqueue: threads are stored in a list sorted by increasing remaining duration
- Election: choose the head of the runqueue
- Block: remove the thread from the runqueue and trigger an election
- Wake up: add the thread to the runqueue (at its sorted position); if it has a lower remaining time than the running thread, preempt it
Example:
Time | Running | Runqueue         | Event
init | ∅       | B(1) → C(4) → A(5) |
0    | B(1)    | C(4) → A(5)        |
1    | C(4)    | A(5)               |
2    | D(1)    | C(3) → A(5)        | D(1) arrives
3    | C(3)    | A(5)               |
6    | A(5)    | ∅                  |
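To check the turnaround numbers from the SJF slide, here is a small sketch (not from the course) that computes the average turnaround time of a non-preemptive schedule for jobs that all arrive at time 0:

    #include <stdio.h>

    /* Average turnaround time of jobs run back-to-back, all arriving at t = 0. */
    static double avg_turnaround(const int *durations, int n)
    {
        int finish = 0, total = 0;
        for (int i = 0; i < n; i++) {
            finish += durations[i];   /* completion time of job i         */
            total  += finish;         /* its turnaround (arrival is at 0) */
        }
        return (double)total / n;
    }

    int main(void)
    {
        int fcfs[] = {5, 3, 4, 3};    /* A, B, C, D in arrival order      */
        int sjf[]  = {3, 3, 4, 5};    /* B, D, C, A: shortest job first   */

        printf("FCFS average turnaround: %.1f\n", avg_turnaround(fcfs, 4)); /* 10.0 */
        printf("SJF  average turnaround: %.1f\n", avg_turnaround(sjf, 4));  /* 8.5  */
        return 0;
    }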
Interactive Scheduling: Round-Robin
Round-robin is a simple preemptive scheduler. Each thread is assigned a time interval (quantum) during which it is allowed to run. When a thread uses up its quantum, it is preempted and put back at the end of the queue.
Example: quantum = 4 ms, four tasks A, B, C and D. C finishes its 4 ms quantum and gets preempted, D becomes the new running thread, and C is put back at the end of the queue as ready.
+ Easy to implement
~ The quantum must be chosen carefully

Interactive Scheduling: Priority-based
Each process is assigned a priority, and the priority-based scheduler always runs the thread with the highest priority. Priorities might be decided by users or by the system, depending on the thread's behavior (IO- or compute-bound).
Example: multilevel queueing (MLQ), with one queue per priority level (e.g., priorities 0 to 9).
With multilevel feedback queueing, the system may change priorities to avoid threads hogging the CPU or starving, and may assign a quantum to threads to improve fairness.

Interactive Scheduling: Fair
A fair scheduler tries to give the same amount of CPU time to all threads. Some also enforce fairness between users (so-called fair-share schedulers): e.g., if user A starts 9 threads and user B starts 1 thread, the two users should still get 50% of the CPU each.

Interactive Scheduling: Lottery
A lottery scheduler randomly chooses the next thread to run on the CPU. The randomness can be skewed by giving more lottery tickets to higher-priority threads. (A small election sketch is given after the recap below.)

Real Time Scheduling
Real-time schedulers must ensure that threads meet their deadlines. They can be separated into two classes:
- Hard real time: deadlines must be met or the system breaks, e.g., the brake system of a car must respond to the pedal being pressed very quickly to avoid accidents.
- Soft real time: deadlines can be missed from time to time without breaking the system, e.g., a video player must decode 50 frames per second, but missing a frame from time to time is fine.
Real-time applications may also be:
- Periodic: the task periodically runs for some (usually short) amount of time, e.g., the frame decoder of the video player is started every 20 ms and must complete in less than 5 ms.
- Aperiodic: the task occurs in an unpredictable way, e.g., the brake mechanism of a car is triggered when the pedal is pressed and must engage the brakes in less than 1 ms.
Examples of real-time schedulers: Earliest Deadline First (EDF), Deadline Monotonic, etc.

Recap
This chapter covered processes (the abstraction of a running program, its PCB, states and memory layout), threads (units of execution sharing an address space, the POSIX thread API, user vs. kernel threads and mapping models) and scheduling (batch, interactive and real-time policies and the metrics they optimize).
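As promised above, here is a rough sketch (not from the course) of a lottery election: each thread holds a number of tickets reflecting its priority, and the winner is the thread owning the randomly drawn ticket:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    struct thread { const char *name; int tickets; };   /* more tickets = higher priority */

    /* Pick the next thread to run by drawing one ticket uniformly at random. */
    static int lottery_elect(const struct thread *t, int n)
    {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += t[i].tickets;

        int draw = rand() % total;        /* winning ticket number       */
        for (int i = 0; i < n; i++) {
            if (draw < t[i].tickets)
                return i;                 /* ticket belongs to thread i  */
            draw -= t[i].tickets;
        }
        return n - 1;                     /* not reached                 */
    }

    int main(void)
    {
        struct thread threads[] = { {"A", 10}, {"B", 5}, {"C", 1} };
        srand(time(NULL));

        /* Simulate a few elections: A should win roughly 10/16 of the time. */
        for (int i = 0; i < 5; i++)
            printf("elected: %s\n", threads[lottery_elect(threads, 3)].name);
        return 0;
    }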
