Operating Systems: Processes PDF
Document Details
University of Guelph
2025
A. Hamilton-Wright
Summary
This document is a set of lecture slides for an Operating Systems course (CIS*3110) at the University of Guelph, dated 2025. The slides cover processes, interrupts, device drivers, critical sections and synchronization primitives, semaphores, deadlock, interprocess communication, and CPU scheduling. The document is suitable for undergraduate students.
Full Transcript
Operating Systems: Processes
CIS*3110: Operating Systems
Dr. A. Hamilton-Wright
School of Computer Science, University of Guelph
2025-01-28
1 / 56

Parts of a Computer
- RAM
- CPU
- I/O Devices
2 / 56

Actual Physical Memory
[Memory-map diagram: RAM spans physical addresses from 0x00...00 up to 0x??...??; the "Zero Page" at the bottom of memory holds the interrupt vector and memory-mapped I/O; BIOS/ROMs (if present) also occupy part of the map.]
3 / 56

Tools at the Hardware Architecture Level
- program counter (PC)
- stack pointer/frame pointer
- processor status register
- general purpose registers
- memory management unit (mapping, protection)
- kernel/user mode
- interrupts and interrupt vector
4 / 56

Interrupt vector links devices with service routines
[Diagram: the vector is a table of function addresses; fn addr 0: kbd service, fn addr 1: mouse service (mouseintr(...)), fn addr 2: CD service, fn addr 3: disc service (discintr(...)).]
5 / 56

basic instruction cycle of the ALU of the CPU:
1. fetch instruction pointed to by the PC
2. increment PC
3. execute instruction
...over and over again

processor status (usually a special register on each CPU)
- user mode – can execute most instructions (but no I/O)
- kernel mode – can execute all user instructions, PLUS access special instructions and registers that control I/O, memory access (also called supervisor mode)

interrupt vector (special range of locations in RAM)
- used to link hardware interrupts with... interrupt service routines
6 / 56

[Diagram: User Space holds processes P1–P4 and shared memory; Kernel Space is crossed via an interrupt or trap table entry and holds the run queue and the interrupt service routines ("bottom half"); the hardware below includes the serial/UART port controller, keyboard, disk, clock/alarm and network interface.]
7 / 56

Program on an Example Machine

    Assembler           Mach Code   Virt Addr
    load R1, 0xA2       0x11A2      0x10
    incr R1             0x3100      0x12
    store R1, 0xA2      0x21A2      0x14
        jmp xx          0x5026      0x16
    yy: decr R2         0x4200      0x18
        ...             ...         ...
        rsb             0x7000      0x24
    xx: zero R2         0x8200      0x26
        incr R2         0x3200      0x28
        store R2, 0xA0  0x22A0      0x2A
        jsr yy          0x6018      0x2C
        incr R2         0x3200      0x2E
        store R2, 0xA2  0x22A2      0x30
        ...             ...         ...

    Op Codes: load 0x1, store 0x2, incr 0x3, decr 0x4, jmp 0x5, jsr 0x6, rsb 0x7, zero 0x8

Registers: R0, R1, R2, R3, SP, PC
8 / 56
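The interrupt vector described above is, in effect, an array of function pointers indexed by a hardware interrupt id. The following minimal user-space sketch shows that idea only; the vector size, device ids and handler names are illustrative assumptions, not taken from the slides.

    /* Sketch: an "interrupt vector" modelled as an array of function pointers.
     * Real vectors live at a fixed physical address and are indexed by the
     * hardware interrupt id; the ids and handlers here are made up. */
    #include <stdio.h>

    #define N_VECTORS 4

    typedef void (*isr_t)(void);            /* an interrupt service routine  */

    static void kbd_isr(void)   { printf("keyboard service\n"); }
    static void mouse_isr(void) { printf("mouse service\n"); }

    static isr_t vector[N_VECTORS];         /* fn addr 0, fn addr 1, ...     */

    /* roughly what the hardware does when device 'id' raises an interrupt */
    static void dispatch(int id)
    {
        if (id >= 0 && id < N_VECTORS && vector[id] != NULL)
            vector[id]();                   /* jump through the vector entry */
    }

    int main(void)
    {
        vector[0] = kbd_isr;                /* install service routines      */
        vector[1] = mouse_isr;
        dispatch(1);                        /* hardware id 1: mouse service  */
        dispatch(0);                        /* hardware id 0: kbd service    */
        return 0;
    }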
Context Switch
- change from one running program to another "on the fly"
- need to be able to change back later
- to do this, we must save the "program state":
  - program counter
  - general purpose registers
  - status register
  - stack pointer
- each program must be resident in different parts of memory
9 / 56

[Timeline diagram: a running program is interrupted by disk and mouse interrupts; each interrupt triggers a context switch into the corresponding service routine and a context switch back to the program, at points A through H along the time axis.]
10 / 56

Interrupts
- a hardware generated jump to subroutine (JSR)
- requested by an external hardware device (not the CPU)
- defined through the interrupt vector
  - resides somewhere in physical memory
  - is effectively an array of function pointers
- may be delayed (i.e.; disabled, possibly temporarily) by masking off
11 / 56

Interrupt Steps
Interrupt:
- save PC on stack
- switch CPU to kernel (supervisor) mode
- load PC from vector using hardware id (the vector contains the start address of the interrupt service routine)
- save all registers
Return from Interrupt (RTI):
- restore registers
- restore saved PC value
- set CPU to previous mode
- continue execution
12 / 56

Critical Sections
- a critical section contains reference(s) to data shared between two or more "threads of control"
  - one may be an interrupt
  - this may be two or more processes
  - this may be two or more threads
- must manage the data so all parties can rely on its value
- if we fail to do this, data may be wrong for one party, or even corrupted for all

    i++;  ←→  load R1, #i
              incr R1
              store R1, #i
13 / 56

A Critical Example

    c = 'A';
    for (i = 0; i < MAX; i++) {
        a[i] = c++;
        if (c > 'E')
            c = 'B';
    }

Interrupt Service Routine:

    set_to_a() {
        c = 'A';
    }
14 / 56

Modern (UNIX-type) systems: 3 types of device drivers
- BLOCK: devices that read/write blocks (disc, tapes, SD-cards, CD-ROM/DVD, ...)
- NETWORK: local area network (Ethernet, token ring), remote connection network interface (PPP, ISDN, ...)
- CHARACTER: everything that does not fit the above two cases (serial ports, display/keyboard, mouse, joystick, touchscreen, ...)
15 / 56

Device Driver Functions
Top Half: communicate with user, kernel
- probe: check for hardware at boot time
- open: called when a process does an open of a device file (e.g., /dev/mouse, /dev/disk0, /dev/null etc.); mostly initializes variables and maybe hardware; generates a file descriptor in user space
- close: called when a process does a close() of the file descriptor created from open (usually just marks the device closed)
- read: called when a process does a read(fd); transfers data from device to process
- write: called when a process does a write(fd); queues data for device to output
Bottom Half: event handling
- interrupt service routine: called when device generates an interrupt and does the I/O on the hardware
16 / 56

    char get_keyb()
    {
        while (???) {
        }
        return ???;
    }

    void keyb_intr()
    {
        /* "magic" addresses for memory mapped I/O */
        char *keyb_reg = (char *) 0x2ff4200;
        char *keyb_status = (char *) 0x2ff4201;

        while (*keyb_status)
            ;
        ??? = *keyb_reg;
    }
17 / 56
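The ??? placeholders above are left as an exercise on the slide. One possible completion, assumed here rather than taken from the lecture, uses a shared one-character buffer and a ready flag; the ring buffer on the next slide is the fuller answer, since a single character can be overwritten if a second key arrives before get_keyb() runs.

    /* One possible completion of the slide's exercise (an assumption, not the
     * official answer): keyb_intr() stores the character in a shared buffer
     * and raises a flag; get_keyb() busy-waits on the flag, then consumes it. */
    static volatile char keyb_buf;          /* last character received         */
    static volatile int  keyb_ready = 0;    /* set by the interrupt handler    */

    char get_keyb()
    {
        while (!keyb_ready)                 /* spin until a character arrives  */
            ;
        keyb_ready = 0;                     /* consume it                      */
        return keyb_buf;
    }

    void keyb_intr()
    {
        /* "magic" memory-mapped addresses, hypothetical values from the slide */
        char *keyb_reg    = (char *) 0x2ff4200;
        char *keyb_status = (char *) 0x2ff4201;

        while (*keyb_status)                /* wait on the status register     */
            ;
        keyb_buf   = *keyb_reg;             /* fills in the slide's "???"      */
        keyb_ready = 1;
    }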
Circular Queue (Ring Buffer)

    char Q[QSIZE];
    int numQ, startQpos, endQpos;
    numQ = startQpos = endQpos = 0;

Add into queue:

    Q[endQpos] = value;
    endQpos = (endQpos + 1) % QSIZE;
    numQ++;

Remove from queue:

    value = Q[startQpos];
    startQpos = (startQpos + 1) % QSIZE;
    numQ--;

[Diagram: a four-slot queue holding 'a', 'b', 'c' in positions 3, 0 and 1.]
18 / 56

Critical Sections in Device Driver
- interrupts can be masked off:
  - int state = spltty() masks interrupt(s) to prevent a second interrupt from over-writing the first
  - int splx(int prevState) returns the interrupt mask to the previous state
- process can be "put to sleep" until there is space in a buffer:
  - int tsleep(void *id, int flags, char *mesg, int timeo) causes the current thread (program counter) to be BLOCKED until someone calls wakeup()
  - int wakeup(void *id) un-blocks all threads which are asleep and have the given identifier
19 / 56

Device Driver Skeleton – read

    static char Q[QSIZE];
    static int numQ = 0, startQpos = 0, endQpos = 0;

    devread(char *ubuf)
    {
        int s;

        s = spltty();
        while (numQ == 0) {
            splx(s);
            tsleep(devread, ...);
            s = spltty();
        }
        *ubuf = Q[ startQpos ];
        startQpos = (startQpos + 1) % QSIZE;
        numQ--;
        splx(s);
    }
20 / 56

Device Driver Skeleton – intr

    devintr()
    {
        int s;

        s = spltty();
        if (QSIZE == numQ) {
            splx(s);
            return;
        }
        Q[ endQpos ] = GET_CHAR_FROM_DEVICE();
        endQpos = (endQpos + 1) % QSIZE;
        numQ++;
        splx(s);
        wakeup(devread);
    }
21 / 56

Running Programs
The execution of a program (an "image") is a process.
The environment that the O/S provides for the execution of a program (i.e.; a process) is that of a simplified pseudo-computer.
[Diagram: the virtual address space runs from 0x00000000 to 0xFFFFFFFF, laid out as Text, Data, Gap, Stack.]
UNIX/Mach process environment:
- a virtual (simplified) address space
- process context
- no direct access to I/O ports
22 / 56

Process Table Entry (Process Context)
- the process table contains an entry for each current process
- if a process has more than one thread there is usually one process table entry per thread (ps(1) may or may not show it this way)
- each entry holds all the information about a process:
  - process (thread) context: program counter, registers (general and special purpose), stack pointer, virtual address space reference
  - scheduling information: BLOCKED -vs- RUNNABLE -vs- RUNNING, priority, time used
23 / 56

Process Status Listing

    % ps auxw
    USER      PID %CPU %MEM  VSZ  RSS TTY   STAT START  TIME COMMAND
    root        1  0.0  0.0 1368   84 ?     S    Jan11  0:04 init
    root        2  0.0  0.0    0    0 ?     SW   Jan11  0:00 [keventd]
    root        4  0.0  0.0    0    0 ?     SWN  Jan11  0:00 [ksoftirqd_CPU0]
    root        9  0.0  0.0    0    0 ?     SW   Jan11  0:00 [bdflush]
    root        5  0.0  0.0    0    0 ?     SW   Jan11  0:00 [kswapd]
    root     1609  0.0  0.0 1368  144 ?     S    Jan11  0:00 klogd -x
    root     1761  0.0  0.0 3508  480 ?     S    Jan11  0:00 /usr/sbin/sshd
    root     1920  0.0  0.2 2320 1380 ?     S    Jan11  0:00 [login]
    root     1921  0.0  0.0 1352   60 tty2  S    Jan11  0:00 /sbin/mingetty t
    andrew   8506  0.0  0.2 4692 1448 tty1  S    09:33  0:00 -csh
    andrew   9373  0.0  0.5 8196 2980 pts/7 S    10:43  0:03 vim Processes.te
    andrew   9795  0.0  0.2 4680 1428 pts/8 S    11:20  0:00 -bin/csh
    andrew  10154  0.0  0.1 2728  780 pts/7 R    11:36  0:00 ps auxw
24 / 56

Synchronization Primitives
Shared data can also be used between processes/threads → we need to manage critical sections at this level.
The two most popular synchronization primitives are:
- mutex: bracket critical section so only 1 process allowed at a time:
      critical_begin();
      ... code in critical section ...
      critical_end();
- semaphores: allow N-at-a-time
Co-operating sequential processes can be envisioned as a separate, dedicated processor for each – with no guarantee which runs first/faster etc.
25 / 56
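On the slides critical_begin() and critical_end() are abstract; in a real threads API the same bracketing is done with a mutex. The runnable sketch below uses POSIX threads, which is my choice of API and not something the slides specify (compile with cc -pthread). With the lock the final count is always 2000000; with the lock/unlock calls removed, increments are lost exactly as the Critical Sections slide describes.

    #include <pthread.h>
    #include <stdio.h>

    /* Sketch (an assumption, not from the slides): pthread_mutex_lock/unlock
     * play the role of critical_begin()/critical_end(). */
    #define N 1000000

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < N; i++) {
            pthread_mutex_lock(&lock);      /* critical_begin()              */
            counter++;                      /* load, incr, store: now safe   */
            pthread_mutex_unlock(&lock);    /* critical_end()                */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* without the lock, the two threads interleave their load/incr/store
         * sequences and the total is usually smaller than 2 * N */
        printf("counter = %ld\n", counter);
        return 0;
    }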
Mutex Synchronization: Producer

    void add_queue(char aNewChar)
    {
        int oldsize;

        do {
            critical_begin();
            oldsize = numQ;
            critical_end();
        } while (oldsize == QSIZE);

        critical_begin();
        Q[ endQpos ] = aNewChar;
        endQpos = (endQpos + 1) % QSIZE;
        numQ++;
        critical_end();
    }
26 / 56

Mutex Synchronization: Consumer

    char del_queue()
    {
        int oldsize, tmpBuf;

        do {
            critical_begin();
            oldsize = numQ;
            critical_end();
        } while (oldsize == 0);

        critical_begin();
        tmpBuf = Q[ startQpos ];
        startQpos = (startQpos + 1) % QSIZE;
        numQ--;
        critical_end();
        return tmpBuf;
    }
27 / 56

Implementing Mutual Exclusion (in Hardware)

    static int mutexFlag = 0;

    critical_begin()
    {
        int oldFlag;

        do {
            /** does oldflag = flag, flag = 1  */
            TEST_AND_SET(&oldFlag, &mutexFlag);
            /** as one indivisible operation   */
        } while (oldFlag);
    }

    critical_end()
    {
        mutexFlag = 0;
    }
28 / 56

Dekker's Algorithm (2 processes)

    int need[2] = { 0, 0 };
    int turn;

    critical_begin(int who)
    {
        need[who] = 1;
        turn = !who;
        while (need[!who] && turn != who)
            ;
    }

    critical_end(int who)
    {
        need[who] = 0;
    }
29 / 56

Semaphores
A semaphore is a synchronization primitive which has an internal integer counter supporting the following operations:
- signal: increments the internal counter
- wait: wait until the internal counter > 0 ... only then decrement the counter
30 / 56

Semaphore Implementation - Using Mutexes

    SEM_WAIT(semaphore_t *s)
    {
        semaphore_t oldsem;

        do {
            critical_begin();
            oldsem = *s;
            if (*s > 0)
                *s = *s - 1;
            critical_end();
        } while (oldsem == 0);
    }

    SEM_SIGNAL(semaphore_t *s)
    {
        critical_begin();
        *s = *s + 1;
        critical_end();
    }

commercial/professional implementations use tsleep() to block the process instead of a busy-loop
31 / 56

Semaphore Producer/Consumer

    available_sem = QSIZE;
    used_sem = 0;
    critical_sem = 1;

    void add_queue(char aNewChar)
    {
        SEM_WAIT( &available_sem );
        SEM_WAIT( &critical_sem );
        Q[ endQpos ] = aNewChar;
        endQpos = (endQpos + 1) % QSIZE;
        SEM_SIGNAL( &critical_sem );
        SEM_SIGNAL( &used_sem );
    }

    char del_queue()
    {
        int tmpBuf;

        SEM_WAIT( &used_sem );
        SEM_WAIT( &critical_sem );
        tmpBuf = Q[ startQpos ];
        startQpos = (startQpos + 1) % QSIZE;
        SEM_SIGNAL( &critical_sem );
        SEM_SIGNAL( &available_sem );
        return tmpBuf;
    }
32 / 56

Co-Operating Sequential Processes – Examples

    semaphore_t U = 1;
    semaphore_t D = 0;
    int sharedch = 'a';

    up() {
        while (1) {
            printf("U-%c ", sharedch);
            if (++sharedch > 'z')
                sharedch = 'a';
        }
    }

    down() {
        while (1) {
            printf("D-%c ", sharedch);
            if (--sharedch < 'a')
                sharedch = 'a';
        }
    }
33 / 56

Co-Operating Sequential Processes – Example #1

    semaphore_t U = 1;
    semaphore_t D = 0;
    int sharedch = 'a';

    up() {
        while (1) {
            SEM_WAIT( &U );
            printf("U-%c ", sharedch);
            if (++sharedch > 'z')
                sharedch = 'a';
            SEM_SIGNAL( &D );
        }
    }

    down() {
        while (1) {
            SEM_WAIT( &D );
            printf("D-%c ", sharedch);
            if (--sharedch < 'a')
                sharedch = 'a';
            SEM_SIGNAL( &U );
        }
    }

Output: U-a D-b U-a D-b U-a...
34 / 56

Co-Operating Sequential Processes – Example #2

    semaphore_t U = 2;
    semaphore_t D = 0;
    int sharedch = 'a';

    up() {
        while (1) {
            SEM_WAIT( &U );
            printf("U-%c ", sharedch);
            if (++sharedch > 'z')
                sharedch = 'a';
            SEM_SIGNAL( &D );
        }
    }

    down() {
        while (1) {
            SEM_WAIT( &D );
            SEM_WAIT( &D );
            printf("D-%c ", sharedch);
            if (--sharedch < 'a')
                sharedch = 'a';
            SEM_SIGNAL( &U );
            SEM_SIGNAL( &U );
        }
    }

Output: U-a U-b D-c U-c U-d D-e U-d U-e D-f U-e U-f D-g U-f...
35 / 56

Co-Operating Sequential Processes – Example #3

    semaphore_t U = 0;
    semaphore_t D = 2;
    int sharedch = 'a';

    up() {
        while (1) {
            SEM_WAIT( &U );
            printf("U-%c ", sharedch);
            if (++sharedch > 'z')
                sharedch = 'a';
            SEM_SIGNAL( &D );
        }
    }

    down() {
        while (1) {
            SEM_WAIT( &D );
            SEM_WAIT( &D );
            printf("D-%c ", sharedch);
            if (--sharedch < 'a')
                sharedch = 'a';
            SEM_SIGNAL( &U );
            SEM_SIGNAL( &U );
        }
    }

Output: D-a U-a U-b D-c U-c U-d D-e U-d U-e D-f U-e U-f D-g...
36 / 56
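The TEST_AND_SET loop of slide 28 relies on an indivisible test-and-set instruction. C11 exposes the same primitive portably as atomic_flag; the sketch below is my mapping onto that facility, since the slides themselves do not name any particular instruction or library.

    #include <stdatomic.h>

    /* Sketch: slide 28's busy-waiting mutex written with C11 atomics.
     * atomic_flag_test_and_set() sets the flag and returns its previous
     * value as one indivisible operation, which is exactly what the
     * slide's TEST_AND_SET(&oldFlag, &mutexFlag) is assumed to do. */
    static atomic_flag mutexFlag = ATOMIC_FLAG_INIT;

    void critical_begin(void)
    {
        while (atomic_flag_test_and_set(&mutexFlag))
            ;                           /* spin until the old value was 0 */
    }

    void critical_end(void)
    {
        atomic_flag_clear(&mutexFlag);  /* mutexFlag = 0                  */
    }

As slide 31 notes for the semaphore implementation, a production lock would block the caller (tsleep()-style) rather than spin.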
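Slide 32's producer/consumer also translates almost line for line into POSIX semaphores, with sem_wait standing in for SEM_WAIT and sem_post for SEM_SIGNAL. The runnable sketch below is my translation under those assumptions; QSIZE and the small demo in main() are made up for illustration. Build with cc -pthread on Linux.

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    /* Sketch: slide 32 rewritten with POSIX semaphores. */
    #define QSIZE 4

    static char Q[QSIZE];
    static int startQpos = 0, endQpos = 0;
    static sem_t available_sem, used_sem, critical_sem;

    static void add_queue(char aNewChar)
    {
        sem_wait(&available_sem);            /* wait for a free slot   */
        sem_wait(&critical_sem);             /* enter critical section */
        Q[endQpos] = aNewChar;
        endQpos = (endQpos + 1) % QSIZE;
        sem_post(&critical_sem);
        sem_post(&used_sem);                 /* one more item queued   */
    }

    static char del_queue(void)
    {
        char c;
        sem_wait(&used_sem);                 /* wait for an item       */
        sem_wait(&critical_sem);
        c = Q[startQpos];
        startQpos = (startQpos + 1) % QSIZE;
        sem_post(&critical_sem);
        sem_post(&available_sem);            /* one more free slot     */
        return c;
    }

    static void *producer(void *arg)
    {
        for (char c = 'a'; c <= 'j'; c++)
            add_queue(c);
        return NULL;
    }

    int main(void)
    {
        pthread_t p;
        sem_init(&available_sem, 0, QSIZE);  /* available_sem = QSIZE  */
        sem_init(&used_sem, 0, 0);           /* used_sem = 0           */
        sem_init(&critical_sem, 0, 1);       /* critical_sem = 1       */
        pthread_create(&p, NULL, producer, NULL);
        for (int i = 0; i < 10; i++)
            putchar(del_queue());
        putchar('\n');
        pthread_join(p, NULL);
        return 0;
    }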
Semaphore Hints
- all semaphores must be properly initialized
- always work from the definition of wait_sem() and signal_sem()
- normally, there will be a wait_sem() with a corresponding signal_sem() on the same semaphore
- the order of wait_sem() operations is quite important; the order of signal less so
37 / 56

Deadlock
- deadlock occurs when processes hold resources and they require resources that other processes already hold
- shown by the presence of a cycle in the request allocation graph
[Diagram, built up step by step over slides 38–43: processes P1 and P2 and resources R1 and R2, with allocation and request edges added one at a time until a cycle through P1, R1, P2 and R2 appears.]
38–43 / 56
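The P1/R1/R2/P2 cycle can be reproduced in a few lines with two pthread mutexes acquired in opposite orders. This runnable sketch is my own illustration (the thread and lock names are chosen to match the slides' diagram), not code from the lecture; acquiring both resources in the same fixed order in every thread removes the cycle and the deadlock.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Sketch: each thread holds one "resource" (mutex) and then requests
     * the one held by the other, forming the request/allocation cycle. */
    static pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

    static void *P1(void *arg)
    {
        pthread_mutex_lock(&R1);    /* P1 holds R1 ...              */
        sleep(1);
        pthread_mutex_lock(&R2);    /* ... and requests R2          */
        pthread_mutex_unlock(&R2);
        pthread_mutex_unlock(&R1);
        return NULL;
    }

    static void *P2(void *arg)
    {
        pthread_mutex_lock(&R2);    /* P2 holds R2 ...              */
        sleep(1);
        pthread_mutex_lock(&R1);    /* ... and requests R1: a cycle */
        pthread_mutex_unlock(&R1);
        pthread_mutex_unlock(&R2);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, P1, NULL);
        pthread_create(&b, NULL, P2, NULL);
        pthread_join(a, NULL);      /* never returns once the cycle forms */
        pthread_join(b, NULL);
        printf("no deadlock this run\n");
        return 0;
    }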
Java synchronized keyword
The synchronized keyword can be used on a whole method or a block to provide a critical section.

method locking:

    public class SyncCount {
        private int count = 0;
        public synchronized void increment() { count++; };
        public synchronized int getValue() { return count; };
    }

block locking:

    public void someMethod() {
        while (keepGoing()) {
            synchronized (someObject) {
                count++;
            }
        }
    }
44 / 56

IPC in General
- Shared Memory: blocks of shared process image
- Semaphores/Mutexes: integer data values for flow control
- Signals: a software "interrupt"
- Pipes: an imitation file shared by two processes
45 / 56

Signals
- used to indicate that some important event has happened
- sequence of events:
  - process A is running and performing some task
  - process B sends a signal to process A
  - process A interrupts whatever it is doing... calls a signal handler (a function)... returns to whatever it was doing before
- there is no direct notification to process B that anything has been done
46 / 56

Signal Values
Signals are simply integer valued actions, similar to interrupts. Some common signals are:

    1  HUP   hangup                   7  BUS   bus error
    2  INT   interrupt               11  SEGV  segmentation violation
    3  QUIT  quit                    30  PWR   power failure
    4  ILL   illegal instruction     15  TERM  terminate process
    5  TRAP  trace trap              10  USR1  user defined #1
    9  KILL  kill process            12  USR2  user defined #2

The left column contains signals typically found at the indicated numbers. The right column contains common signals whose ID numbers may vary between platforms; the ID numbers for Linux are provided. The numbers for a given platform may be found by running kill -l.
47 / 56

Signal Example

    #include <stdio.h>
    #include <stdlib.h>
    #include <signal.h>

    void signalHandler(int code)
    {
        fprintf(stderr, "Interrupt caught -- cleaning up\n");
        performCleanup();
        exit(1);
    }

    void main()
    {
        signal(SIGINT, signalHandler);
        ...
48 / 56

Pipes
A pipe is a file descriptor pair shared between processes, attached to a buffer common to both processes. The buffer is not stored anywhere in a filesystem, only in memory.
[Diagram: P1 and P2 connected through a (fixed size) buffer.]
49 / 56

Scheduling
Basic process scheduler loop: when a process can no longer run (exits, waiting for I/O or time-slice used up):
- mark process RUNNABLE or blocked (and why – blocked status)
- save process context in process table entry
- search process table for another RUNNABLE process
- mark this process running
- restore process context for this process
- the system now runs this process, picking up where it left off (just as with a hardware interrupt)
50 / 56

Scheduling Algorithm or "Queuing Discipline"
- FIFO: if jobs/processes/... repeatedly join the queue this is typically called "round robin." Typically, round robin is preemptive.
- SJF: shortest job first; allow the job with the shortest running time to go next (jump to the head of the queue). Not preemptive.
A preemptive discipline permits a job to be preempted before it completes.
51 / 56

Scheduling Algorithms/Queuing Disciplines
- FIFO and round robin are generally considered to be fair, as jobs cannot be stuck waiting forever
- SJF is optimal w.r.t. throughput: if you always run the job that will finish in the shortest time, you will always finish more jobs in a given elapsed time than under any other ordering
- it is unfair, and impractical for processes (can be approximated)
52 / 56

Scheduling Algorithms/Queuing Disciplines
In general:
- preemptive schedulers will perform better (i.e.; smoother response) than non-preemptive schedulers and are required for interactive systems
- schedulers will normally bias towards short jobs, such as updating an editor after a keystroke
- operators/users will be able to set/adjust priorities for jobs, see nice(1)
- dynamic priority is updated during context switch based on last run time
- there will always be a trade-off between coarse -vs- fine granularity of the time-slices:
  - coarse: low overhead / poor interactive response
  - fine: higher overhead / better interactive response
53 / 56

Scheduling Algorithms/Queuing Disciplines
- if preemption occurs extremely frequently, such that each job gets an infinitesimally small time each time it runs, the discipline is called "processor sharing"
- whereas, if the preemption time is infinitely large, the discipline is called FIFO
54 / 56

Scheduling Algorithms/Types of Tasks
- batch: insensitive to gaps in runtime; get overall task done
- interactive: sensitive to runtime gaps; typically lots of I/O
- real-time: critically sensitive to runtime gaps; demand fixed fraction of CPU/unit time
55 / 56

Priority Queues (VMS/Windows)
[Diagram: separate run queues for Priority 1, Priority 2 and Priority 3.]
Some systems vary the priority dynamically, others do not.
56 / 56
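Looking back at the Pipes slide (49/56), the file descriptor pair and shared in-memory buffer map directly onto the POSIX pipe() call. The runnable sketch below is my illustration, not lecture code: P1 (a child created with fork()) writes into the pipe and P2 (the parent) reads from it.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];                       /* fd[0] = read end, fd[1] = write end */
        char buf[32];
        const char *msg = "hello from P1";

        if (pipe(fd) == -1)              /* create the shared in-memory buffer  */
            return 1;

        if (fork() == 0) {               /* P1: the writer                      */
            close(fd[0]);
            write(fd[1], msg, strlen(msg));
            close(fd[1]);
            _exit(0);
        }

        /* P2: the reader */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("P2 read: %s\n", buf);
        }
        close(fd[0]);
        wait(NULL);
        return 0;
    }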
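To make the SJF claim on slide 52 concrete, the short program below compares average turnaround time for three CPU bursts run in arrival (FIFO) order versus shortest-job-first order; the burst lengths 8, 4 and 1 are made-up illustrative values, not from the slides.

    #include <stdio.h>

    /* Sketch: average turnaround time (completion time, with all jobs
     * arriving at t = 0) for one fixed ordering of the run queue. */
    static double avg_turnaround(const int run[], int n)
    {
        int finish = 0, total = 0;
        for (int i = 0; i < n; i++) {
            finish += run[i];            /* job i completes at this time */
            total  += finish;
        }
        return (double)total / n;
    }

    int main(void)
    {
        int fifo[] = { 8, 4, 1 };        /* arrival order                */
        int sjf[]  = { 1, 4, 8 };        /* shortest job first           */
        printf("FIFO average turnaround: %.2f\n", avg_turnaround(fifo, 3)); /* 11.00 */
        printf("SJF  average turnaround: %.2f\n", avg_turnaround(sjf, 3));  /*  6.33 */
        return 0;
    }

Running the 1-unit job first means two of the three jobs have already finished by time 5, which is the sense in which SJF completes more jobs than any other ordering over a given elapsed time.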