Operating System Chapter 2 - Eng_2024 PDF

Document Details


School of Information and Communication Technology

2024

Đỗ Quốc Huy

Tags

operating systems, process management, CPU scheduling, computer science

Summary

This document is Chapter 2 of an operating systems course, covering process management: process definition and states, process control blocks, scheduling and queues, threads, CPU scheduling algorithms, and process synchronization, with worked examples throughout.

Full Transcript


Operating System (Principles of Operating Systems) Đỗ Quốc Huy [email protected] Department of Computer Science School of Information and Communication Technology Chapter 2 Process Management ① Process’s definition Chapter 2 Process Management Process (review) A running program Provided resources (CPU, memory, I/O devices...) to complete its task Resources are provided at: process creating time While running system includes many processes running at the same time OS’s : Perform system instruction code User’s: Perform user’s code May contain ≥ 1 threads Chapter 2 Process Management OS’s roles: Guarantee process and thread activities Create/deltete process (user, system) Process scheduling Provide synchronization mechanism, communication and prevent deadlock among processes Chapter 2 Process Management ① Process ② Thread ③ CPU scheduling ④ Critical resource and process synchronization ⑤ Deadlock and solutions Chapter 2 Process Management 1. Process ⚫Concept of process ⚫Process Scheduling ⚫Operations on process ⚫Process cooperation ⚫Inter-process communication Chapter 2 Process Management 1. Process System’s state Processor: Registers’ values Memory: Content of memory block Peripheral devices: Devices’ status Program’s executing ⇒ system’s state changing Change discretely, follow each executed instruction Process: a sequence of system’s state changing Begin with start state Change from state to state is done according to requirement in user’s program Process is the execution of a program Chapter 2 Process Management 1. Process Process >< program #include Program: passive object (content int main() { of a file on disk) printf(“hello world!”); Program’s Code: machine return 0; instruction (CD2190EA...) } Data: Variable stored and used in $gcc hello.c memory Global variable Executable Dynamic allocation (a.out) Process variable (malloc, new,..) $./a.out Stack variable (function’s Disk parameter, local variable) Dynamic linked library (DLL) Not compiled and lined Ram with program Chapter 2 Process Management 1. Process Process >< program #include When program is running, int main() { minimum resource printf(“hello world!”); requirement return 0; Memory for program’s } code and data Processor’s registers $gcc hello.c used for program execution Executable (a.out) Process $./a.out Disk Ram Chapter 2 Process Management 1. Process Process >< program (Cont.) Process: active object (instruction pointer, set of resources) A program can be Part of process’s state One program, many process ( different data set) Example: gcc hello.c || gcc baitap.c Call to many process Chapter 2 Process Management 1. Process Compile and run a program Static library (libc,streams…) Segments Source code Binary file(.o file) Code Data Other segment Data Executing file(.o file) Format is depended on OS Process’s address Code space Chapter 2 Process Management 1. Process Compile and run a program The OS creates a process and allocate a memory area for it System program loader/exec Read and interprets the executive file (file’s header) Set up address space for process to store code and data from executive file Put command’s paramenters, environment variable (argc, argv, envp) into stack Set proper values for processor’s registerand call "_start()" (OS’s fucntion) Program begins to run at "_start()". This function call to main() (program’s functions) ⇒”Process" is running, not mention ”program" anymore When main() function end, OS call to "_exit()" to cancel the process and take back resources Chapter 2 Process Management 1. 
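The slides above distinguish where a process keeps its code, its global data, its heap allocations and its stack variables. As an illustration only (not taken from the slides), a minimal C program that prints one address from each region makes that separation visible at run time:

#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;                 /* data segment: global variable          */

int main(void)
{
    int  local = 42;                    /* stack: function's local variable       */
    int *heap  = malloc(sizeof *heap);  /* heap: dynamically allocated variable   */

    printf("code  (main)   : %p\n", (void *)main);
    printf("data  (global) : %p\n", (void *)&global_counter);
    printf("heap  (malloc) : %p\n", (void *)heap);
    printf("stack (local)  : %p\n", (void *)&local);

    free(heap);
    return 0;                           /* after main() returns, _exit() reclaims resources */
}

Running the program shows the four addresses falling into clearly separated ranges, which is what the process address-space diagram on the slide depicts.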
Process Process’s state Process’s state is part of process’s current activity Process’s states changing flowchart [Silberschatz] System that has only 1 processor Only 1 process in running state Many processes in ready or waiting state Chapter 2 Process Management 1. Process 1.1. Concept of process Process’s state Chapter 2 Process Management 1. Process Process Control Block (PCB) Each process is presented in the system by a process control block PCB: an information structure allows identify only 1 process Process state Process counter CPU’s registers Information for process scheduling Information for memory management Inforamtion of usable resources Statistic information Pointer to another PCB... Chapter 2 Process Management 1. Process 1.1. Concept of process Process Control Block (PCB) Chapter 2 Process Management 1. Process 1.1. Concept of process Process List Chapter 2 Process Management 1. Process 1.1. Concept of process Process Table (Linux) Chapter 2 Process Management 1. Process 1.1. Concept of process Single-thread process and Multi-thread process Single-thread process : A running process with only 1 executed thread Have 1 thread of executive instruction ⇒ Allow executing only 1 task at a moment Multi-thread process : Process with more than 1 executed thread ⇒ Allow more than 1 task to be done at a moment Chapter 2 Process Management 1. Process 1.2. Process Scheduling ⚫Concept of process ⚫Process Scheduling ⚫Operations on process ⚫Process cooperation ⚫Inter-process communication Chapter 2 Process Management 1. Process 1.2. Process Scheduling Introduction Objective Maximize CPU’s usage time ⇒ Need many processes in the system Problem CPU switching between processes ⇒ Need a queue for processes Single processor system ⇒ 1 process running ⇒ Other processes must wait until processor is free Chapter 2 Process Management 1. Process 1.2. Process Scheduling Process queue I The system have many queues for processes Job-queue Set of processes in the system Ready-Queue Set of processes exist in the memory, ready to run Device queues Set of processes waiting for an I/O device. Queues for each device are different Chapter 2 Process Management 1. Process 1.2. Process Scheduling Process queue I Chapter 2 Process Management 1. Process 1.2. Process Scheduling Process queue II  Process moves between different queues  Newly created process is putted in ready queue and wait until it’s selected to execute Chapter 2 Process Management 1. Process 1.2. Process Scheduling Process queue III Process selected and running ① process issue an I / O request, and then be placed in an I / O queue ② process create a new sub-process and wait for its termination ③ process removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue In case (1&2) after the waiting event is finished, Process eventually switches from waiting →ready state Process is then put back to ready queue Process continues this cycle (ready, running, waiting) until it terminates. It’s removed from all queues its PCB and resources deallocated Chapter 2 Process Management 1. Process 1.2. Process Scheduling Scheduler Process selection is carried out by appropriate scheduler  Job scheduler; Long-term scheduler  CPU scheduler; Short-term scheduler Chapter 2 Process Management 1. Process 1.2. Process Scheduling Scheduler process selection is carried out by appropriate scheduler ⚫ Job scheduler; Long-term scheduler ⚫ CPU scheduler; Short-term scheduler Chapter 2 Process Management 1. Process 1.2. 
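The PCB slide above lists the per-process bookkeeping the kernel keeps, and the queue slides describe PCBs linked into ready and device queues. A rough C sketch ties the two together; the field names are illustrative only and do not reflect any real kernel's layout:

#include <stddef.h>

/* Illustrative PCB layout; real kernels (e.g. the Linux task_struct) differ. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process identifier                     */
    enum proc_state state;            /* current state                          */
    unsigned long   program_counter;  /* saved instruction pointer              */
    unsigned long   registers[16];    /* saved CPU register values              */
    int             priority;         /* information for process scheduling     */
    void           *page_table;       /* information for memory management      */
    void           *open_files;       /* resources in use                       */
    struct pcb     *next;             /* pointer to the next PCB in a queue     */
};

/* A ready queue as a FIFO linked list of PCBs. */
struct queue { struct pcb *head, *tail; };

void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;
    if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
    return p;
}

Moving a process between the job, ready and device queues then amounts to moving its PCB from one such list to another.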
Process Scheduling Job Scheduler Select processes from program queue stored in disk and put into memory to execute Not frequently (seconds/minutes) Control the degree of the multi-programming (number of process in memory) When the degree of multi-programming is stable, the scheduler may need to be invoked when a process leaves the system Chapter 2 Process Management 1. Process 1.2. Process Scheduling Job Scheduler ⚫ Job selection’s problem ⚫ Consider the following program IO CPU IO Chapter 2 Process Management 1. Process 1.2. Process Scheduling Job Scheduler ⚫ I/O bound process: utilizes less CPU time ⚫ Ex: Graphic programs Chapter 2 Process Management 1. Process 1.2. Process Scheduling Job Scheduler ⚫ CPU bound process: uses more CPU time ⚫ Ex: math solving programs… Chapter 2 Process Management 1. Process 1.2. Process Scheduling Job Scheduler ⚫ Job selection’s problem ⚫ Should select good process mix of both ⇒ I/O bound : ready queue is empty, waste CPU ⇒ CPU bound: device queue is empty, device is wasted If input are the following jobs I/O Heavy CPU Heavy Chapter 2 Process Management 1. Process 1.2. Process Scheduling Job Scheduler ⚫ Job scheduler will arrange the Jobs then send them to Process Scheduler Chapter 2 Process Management 1. Process 1.2. Process Scheduling CPU Scheduler Selects from processes that are ready to execute, and allocates the CPU to 1 of them Frequently work (example: at least 100ms/once) A process may execute for only a few milliseconds before waiting for an I/O request Select new process that is ready the short-term scheduler must be fast 10ms to decide ⇒10/(110)=9% CPU time is wasted Process selection problem (CPU scheduler) Chapter 2 Process Management 1. Process 1.2. Process Scheduling Process swapping (Medium-term scheduler) Task Removes processes from memory, reduces the degree of multiprogramming Process can be brought back into memory and its execution can be continued where it left off Objective: Free memory, make a wider unused memory area Chapter 2 Process Management 1. Process 1.2. Process Scheduling Context switch Switch CPU from 1 process to another process Save the state of the old process and load the saved state for the new process includes the value of the CPU’s registers, the process state and memory-management information Take time and CPU cannot do anything while switching Depend on the support of the hardware More complex the OS, the more works must be done during a context switch Chapter 2 Process Management 1. Process 1.2. Process Scheduling Context switch ⚫ Occurs when an intterupt signal appear( timely interrupt) or when process issue a system call (to perform I/O) ⚫ CPU switching diagram(Silberschatz 2002) ⚫CPU switch from this process to another process (switch the running process) Chapter 2 Process Management 1. Process 1.2. Process Scheduling Process swapping (Medium-term scheduler) Chapter 2 Process Management 1. Process 1.2. Process Scheduling Medium-term scheduler Chapter 2 Process Management 1. Process 1.2. Process Scheduling Schedulers and process’s states New Chapter 2 Process Management 1. Process 1.3. Operations on process ⚫Concept of process ⚫Process Scheduling ⚫Operations on process ⚫Process cooperation ⚫Inter-process communication Chapter 2 Process Management 1. Process 1.3. Operation on process Operations on process ⚫ Process Creation ⚫ Process Termination Chapter 2 Process Management 1. Process 1.3. 
Operation on process Process Creation ⚫ Process may create several new processes (CreateProcess(), fork()) ⚫ The creating process: parent-process ⚫ The new process: child-process ⚫ Child-process may create new process ⇒ Tree of process Chapter 2 Process Management 1. Process 1.3. Operation on process Process Creation Example: A process tree in Solaris OS Chapter 2 Process Management 1. Process 1.3. Operation on process Example Process Creation in Linux #include else if (pid == 0) { #include execlp("/bin/ls","ls",NULL); int main() } else { { wait(NULL); printf("Child Complete"); pid = fork(); } if (pid < 0) { } fprintf(stderr, "Fork Failed"); return 1; } Chapter 2 Process Management 1. Process 1.3. Operation on process Process Creation Resource allocation Child receives resource from OS parent-process All of the resource a subset of the resources of the parent process (to prevent a process from creating too many children) When a process creates a new process, two possibilities exist in terms of execution: The parent continues to execute concurrently with its children The parent waits until some or all of its children have terminated Chapter 2 Process Management 1. Process 1.3. Operation on process Process Termination Finishes executing its final statement and asks the OS to delete (exit) Return data to parent process Allocated resources are returned to the system Parent process may terminate the execution of child process Parent must know child process’s id ⇒ child process has to send its id to parent process when created Use the system call (abort) Parent process terminate child process when The child has exceeded its usage of some of the resources The task assigned to the child is no longer required The parent is exiting, and the OS does not allow a child to continue ⇒Cascading termination. Example: turn off computer Chapter 2 Process Management 1. Process 1.3. Operation on process Some functions with process in WIN32 API CreateProcess(...) LPCTSTR Name of the execution program LPTSTR Command line paramenter LPSECURITY_ATTRIBUTES A pointer to process security attribute structure LPSECURITY_ATTRIBUTES A pointer to thread security attribute structure BOOL allow process inherit devices (TRUE/FALSE) DWORD process creation flag (Example: CREATE_NEW_CONSOLE) LPVOID Pointer to environment block LPCTSTR full path to program LPSTARTUPINFO info structure for new process LPPROCESS_INFORMATION Information of new process TerminateProcess(HANDLE hProcess, UINT uExitCode) hProcess A handle to the process to be terminated uExitCode process termination code WaitForSingleObject(HANDLE hHandle, DWORD dwMs) hHandle Object handle dwMs Waiting time (INFINITE) Chapter 2 Process Management 1. Process 1.3. Operation on process Some functions with process in WIN32 API(Example) #include #include int main(VOID){ STARTUPINFO si; PROCESS INFORMATION pi; ZeroMemory(&si, sizeof(si)); si.cb = sizeof(si); ZeroMemory(&pi, sizeof(pi)); if (!CreateProcess(NULL, /"C:\\WINDOWS\\system32\\mspaint.exe", NULL, NULL, FALSE, 0, NULL, NULL, &si,&pi)) { fprintf(stderr, "Create Process Failed"); return -1; } WaitForSingleObject(pi.hProcess, INFINITE); printf("Child Complete"); CloseHandle(pi.hProcess); CloseHandle(pi.hThread); } Chapter 2 Process Management 1. Process 1.4 Process cooperation ⚫Concept of process ⚫Process Scheduling ⚫Operations on process ⚫Process cooperation ⚫Inter-process communication Chapter 2 Process Management 1. Process 1.4. 
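The WIN32 CreateProcess() example above is hard to read as printed because the two slide columns were interleaved. A cleaned-up sketch of the same call sequence, launching mspaint.exe and waiting for it to finish, assuming a Windows build environment:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    char cmd[] = "C:\\WINDOWS\\system32\\mspaint.exe";

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    /* Create a child process running Paint; parent and child run concurrently. */
    if (!CreateProcess(NULL, cmd,
                       NULL, NULL,        /* default process/thread security    */
                       FALSE,             /* handles are not inherited           */
                       0,                 /* no special creation flags           */
                       NULL, NULL,        /* parent's environment and directory  */
                       &si, &pi)) {
        fprintf(stderr, "Create Process Failed (%lu)\n", GetLastError());
        return -1;
    }

    /* Parent waits for the child, like wait(NULL) in the fork() example. */
    WaitForSingleObject(pi.hProcess, INFINITE);
    printf("Child Complete\n");

    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}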
Process cooperation Processes’ relation classification Sequential processes PA tsb t tsA tfA PB Parallel processes PA tsb t tfA tsA PB Independence: Not affect or affected by other running process in the system Cooperation: affect or affected by other running process in the system Chapter 2 Process Management 1. Process 1.4. Process cooperation Processes’ relation classification Cooperation in order to Share Information Speedup Computation Provide Modularity Increase the convenience Process collaborate requires mechanism that allow process to Communicate with each other Synchronize their actions Chapter 2 Process Management 1. Process 1.4. Process cooperation Producer - Consumer Problem I The system includes 2 processes Producer creates product Consumer consumes created product Application Printing program (producer) creates characters that are consumed by printer driver (consumer) Compiler (producer) creates assembly code, Assembler (consumer/producer) consumes assembly code and generate object module which is consumed by the exec/loader (consumer) Chapter 2 Process Management 1. Process 1.4. Process cooperation Producer - Consumer Problem II Producer and̀ Consumer work simultaneously Use a sharing Buffer which store product filled in by Producer and taken out by Consumer IN Next empty place in buffer OUT First filled place in the buffer. Counter Number of products in the buffer Producer and Consumer must be synchronized Consumer does not try to consume a product that was not created Unlimited size buffer Buffer is empty -> Consumer need to wait Producer can put product into buffer without waiting Limited size buffer Buffer is empty -> Consumer need to wait Producer need to wait if the Buffer is full Chapter 2 Process Management 1. Process 1.4. Process cooperation Producer - Consumer Problem II Producer while(1) { while (Counter == BUFFER_SIZE) ; Buffer[IN] = nextProduced; IN = (IN + 1) % BUFFER_SIZE; Counter++; } Consumer while(1){ while(Counter == 0) ; nextConsumed = Buffer[OUT]; OUT =(OUT + 1) % BUFFER_SIZE; Counter--; } Chapter 2 Process Management 1. Process 1.5 Inter-process communication ⚫Concept of process ⚫Process Scheduling ⚫Operations on process ⚫Process cooperation ⚫Inter-process communication Chapter 2 Process Management 1. Process 1.5 Inter-process communication Communicate between process Memory sharing model Process share a common memory area Implementation codes are written explicitly by application programmer Example: Producer-Consumer problem Process A Sharing Area Process B Kernel Chapter 2 Process Management 1. Process 1.5 Inter-process communication Memory sharing model Example file mapping in Windows Chapter 2 Process Management 1. 
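The Producer and Consumer loops above are pseudocode fragments. A self-contained sketch, using two Win32 threads in place of two processes and an assumed BUFFER_SIZE of 8, shows them running against one shared buffer; the busy-waiting and the unprotected Counter updates are kept deliberately, since they are exactly what the synchronization sections later in this chapter address:

#include <windows.h>
#include <stdio.h>

#define BUFFER_SIZE 8

int Buffer[BUFFER_SIZE];
int IN = 0, OUT = 0;            /* next empty slot / first filled slot          */
volatile int Counter = 0;       /* number of products currently in the buffer   */

DWORD WINAPI Producer(LPVOID arg)
{
    int nextProduced = 0;
    while (1) {
        while (Counter == BUFFER_SIZE)
            ;                                   /* buffer full: busy-wait       */
        Buffer[IN] = nextProduced++;
        IN = (IN + 1) % BUFFER_SIZE;
        Counter++;                              /* unprotected shared update    */
    }
    return 0;
}

DWORD WINAPI Consumer(LPVOID arg)
{
    int nextConsumed;
    while (1) {
        while (Counter == 0)
            ;                                   /* buffer empty: busy-wait      */
        nextConsumed = Buffer[OUT];
        OUT = (OUT + 1) % BUFFER_SIZE;
        Counter--;                              /* races with Counter++         */
        printf("%d\n", nextConsumed);
    }
    return 0;
}

int main(void)
{
    HANDLE h[2];
    h[0] = CreateThread(NULL, 0, Producer, NULL, 0, NULL);
    h[1] = CreateThread(NULL, 0, Consumer, NULL, 0, NULL);
    WaitForMultipleObjects(2, h, TRUE, INFINITE);   /* runs until interrupted   */
    return 0;
}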
Process Ex: File mapping- Process 1 #include pBuf = (LPTSTR) MapViewOfFile(hMapFile, #include // handle to map object #include FILE_MAP_ALL_ACCESS, // read/write permission #include 0, #define BUF_SIZE 256 0, TCHAR szName[]=TEXT("Global\\MyFileMappingObject"); BUF_SIZE); TCHAR szMsg[]=TEXT("Message from first process."); int _tmain() if (pBuf == NULL) { { _tprintf(TEXT("Could not map view of file HANDLE hMapFile; (%d).\n"), LPCTSTR pBuf; GetLastError()); hMapFile = CreateFileMapping( INVALID_HANDLE_VALUE, // use paging file CloseHandle(hMapFile); NULL, // default security return 1; PAGE_READWRITE, // read/write access } 0, // maximum object size (high-order DWORD) BUF_SIZE, // maximum object size (low-order CopyMemory((PVOID)pBuf, szMsg, DWORD) (_tcslen(szMsg) * sizeof(TCHAR))); szName); // name of mapping object _getch(); if (hMapFile == NULL) UnmapViewOfFile(pBuf); { _tprintf(TEXT("Could not create file mapping object (%d).\n"), CloseHandle(hMapFile); GetLastError()); return 1; return 0; } Chapter 2 Process Management 1. Process Ex: File mapping- Process 2 #include pBuf = (LPTSTR) MapViewOfFile(hMapFile #include // handle to map object #include FILE_MAP_ALL_ACCESS, #include // read/write permission #pragma comment(lib, "user32.lib") 0, 0, #define BUF_SIZE 256 BUF_SIZE); TCHAR szName[]=TEXT("Global\\MyFileMappingObject"); if (pBuf == NULL) int _tmain() { { _tprintf(TEXT("Could not map view of HANDLE hMapFile; file (%d).\n"), LPCTSTR pBuf; GetLastError()); hMapFile = OpenFileMapping( CloseHandle(hMapFile); FILE_MAP_ALL_ACCESS, // read/write access FALSE, // do not inherit the name return 1; szName); // name of mapping object } MessageBox(NULL, pBuf, if (hMapFile == NULL) TEXT("Process2"), MB_OK); { _tprintf(TEXT("Could not open file mapping object UnmapViewOfFile(pBuf); (%d).\n"), GetLastError()); return 1; CloseHandle(hMapFile); } return 0; } Chapter 2 Process Management 1. Process 1.5 Inter-process communication Communicate between process Inter-process communication model A mechanism allow processes to communicate and synchronize Often used in distributed system where processes lie on different computers (chat) Guaranteed by message passing system Chapter 2 Process Management 1. Process 1.5 Inter-process communication Message passing system Allow processes to communicate without sharing variable Require 2 basic operations Send (msg) msg has fixed or variable size Receive (msg) If 2 processes P and Q want to communicate, they need to Establish a communication link (physical/logical) between them Exchange messages via operations: send/receive Chapter 2 Process Management 1. Process Direct Communication Each process that wants to communicate must explicitly name the recipient or sender of the communication send (P, message) – Send a message to process P receive(Q, message) – Receive a message from process Q Chapter 2 Process Management 1. Process Direct Communication Properties of a communication link A link is established automatically A link is associated with exactly 2 processes Exactly 1 link exists between each pair of processes Link can be 1 direction but often 2 directions Chapter 2 Process Management 1. 
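The two file-mapping listings above were also interleaved column by column. A cleaned-up sketch of the first process, which creates the named mapping and writes a message into the shared area, following the Windows file-mapping calls shown on the slide:

#include <windows.h>
#include <tchar.h>
#include <conio.h>
#include <stdio.h>

#define BUF_SIZE 256

TCHAR szName[] = TEXT("Global\\MyFileMappingObject");
TCHAR szMsg[]  = TEXT("Message from first process.");

int main(void)
{
    HANDLE hMapFile;
    LPTSTR pBuf;

    /* Create a named mapping backed by the paging file (the shared memory area). */
    hMapFile = CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                 PAGE_READWRITE, 0, BUF_SIZE, szName);
    if (hMapFile == NULL) {
        _tprintf(TEXT("Could not create file mapping object (%lu).\n"), GetLastError());
        return 1;
    }

    /* Map the shared area into this process's address space. */
    pBuf = (LPTSTR)MapViewOfFile(hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, BUF_SIZE);
    if (pBuf == NULL) {
        _tprintf(TEXT("Could not map view of file (%lu).\n"), GetLastError());
        CloseHandle(hMapFile);
        return 1;
    }

    /* Write the message; the second process opens the same name and reads it. */
    CopyMemory((PVOID)pBuf, szMsg, _tcslen(szMsg) * sizeof(TCHAR));
    _getch();                       /* keep the mapping alive until a key press  */

    UnmapViewOfFile(pBuf);
    CloseHandle(hMapFile);
    return 0;
}

The second process is symmetric: it calls OpenFileMapping() with the same name, maps a view, and displays the message, as in the original slide.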
Process 1.5 Inter-process communication Indirect Communication Messages are sent to and received from mailboxes, or ports Each mailbox has a unique identification 2 processes can communicate only if they share a mailbox Communication link’s properties Established only if both members of the pair have a shared mailbox A link may be associated with more than 2 processes Each pair of communicating processes may have many links Each link corresponding to 1 mailbox A link can be either 1 or 2 directions Chapter 2 Process Management 1. Process 1.5 Inter-process communication Indirect Communication Operations: Create a new mailbox Send and receive messages through the mailbox Send(A, msg): send a msg to mailbox A Receive(A, msg): receive a msg from mailbox A Delete a mailbox Chapter 2 Process Management 1. Process Synchronization Message passing may be either blocking or nonblocking Blocking synchronous communication Non-blocking asynchronous communication send() and receive() can be blocking or non-blocking Blocking send: sending process is blocked until the message is received by the receiving process or by the mailbox Non-blocking send: The sending process sends the message and resumes operation Blocking receive: receiver blocks until a message is available Non-blocking receive: receiver retrieves either a valid message or a null Chapter 2 Process Management 1. Process 1.5 Inter-process communication Message Buffering Messages exchanged by communicating processes reside in a temporary queue Sender: Send( buff,type,msg) Reciever Buffer Chapter 2 Process Management 1. Process 1.5 Inter-process communication Buffering A queue can be implemented in 3 ways Zero capacity: maximum length 0 => link cannot have any messages waiting in it sender must block until the recipient receives the message Bounded capacity Queue has finite length n ⇒ store at most n messages If the queue is not full, message is placed in the queue and the sender can continue execution without waiting If link is full, sender must block until space is available in the queue Unbound capacity The sender never blocks Chapter 2 Process Management 1. Process 1.5 Inter-process communication Windows XP Message Passing Chapter 2 Process Management 1. Process 1.5 Inter-process communication Communication in Client - Server Systems with Sockets Socket is defined as an endpoint for communication, Each process has 1 socket Made up of an IP address and a port number. E.g.: 161.25.19.8:1625 IP Address: Computer address in the network Port: identifies a specific process Types of sockets Stream Socket: Based on TCP/IP protocol → Reliable data transfer Datagram Socket: based on UDP/IP protocol → Unreliable data transfer Win32 API: Winsock Windows Sockets Application Programming Interface Chapter 2 Process Management 1. Process 1.5 Inter-process communication Winsock API 32 functions socket() Create socket for data communication bind() Assign an identifier for created socket (assign with a port) listen() Listen to one connection accept() Accept connection connect() Connect to the server. send() Send data with stream socket. sendto() Send data with datagram socket. receive() Receive data with stream socket. recvfrom() Receive data with datagram socket. closesocket() End an existed socket........... Chapter 2 Process Management 1. Process 1.5 Inter-process communication Homework Use Winsock to make a Client-Server program Chat program........... 
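For the Winsock homework, a minimal TCP echo-server skeleton shows the socket()/bind()/listen()/accept()/recv()/send() sequence from the API table and can serve as a starting point for the chat program; the port number and the single-client loop are illustrative choices, not part of the slides:

#include <winsock2.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    SOCKET srv, cli;
    struct sockaddr_in addr;
    char buf[512];
    int n;

    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    srv = socket(AF_INET, SOCK_STREAM, 0);       /* stream socket (TCP/IP)       */

    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(9000);          /* illustrative port            */
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));   /* assign a port        */
    listen(srv, 1);                              /* listen for one connection    */

    cli = accept(srv, NULL, NULL);               /* accept the client            */
    while ((n = recv(cli, buf, sizeof(buf), 0)) > 0)
        send(cli, buf, n, 0);                    /* echo each message back       */

    closesocket(cli);
    closesocket(srv);
    WSACleanup();
    return 0;
}

A chat client follows the same pattern with connect() in place of bind()/listen()/accept().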
Chapter 2 Process Management ① Process ② Thread ③ CPU scheduling ④ Critical resource and process synchronization ⑤ Deadlock and solutions Chapter 2 Process Management 2. Thread 2.1. Introduction ⚫Introduction ⚫Multithreading model ⚫Thread implementation with Windows ⚫Multithreading problem Chapter 2 Process Management 2. Thread 2.1. Introduction ⚫Large size vector computing For Example: (k = 0;k < Vector n;k++)computation { a[k] = b[k]*c[k]; } With multi processors system Chapter 2 Process Management 2. Thread 2.1. Introduction Example: Chat Process P Msg receiving problem Process Q ⚫ Blocking Recieve While (1) { While (1) { ⚫ Non-blocking Receive Receive(P,Msg); ReadLine(Msg); Send(Q,Msg); PrintLine(Msg); Receive(Q,Msg); Solution ReadLine(Msg); PrintLine(Msg); Concurrently perform Send(P,Msg); } Receive & Send } Chapter 2 Process Management 2. Thread 2.1. Introduction Program - Process - Thread ⚫ Program: Sequence of instructions, variables,.. ⚫ Process: Running program: Stack, devices, processor,.. ⚫ Thread: A running program in process context ⚫ Multi-processor → Multi threads, each thread runs on 1 processor ⚫ Different in term of registers’ values, stack’s content Chapter 2 Process Management 2. Thread 2.1. Introduction Single-threaded and multi-threaded process ⚫ Traditional OS (MS-DOS, UNIX) ⚫ Process has 1 controlling thread (heavyweight process) ⚫ Modern OS (Windows, Linux) ⚫ Process may have many threads ⚫ Perform many tasks at a single time Chapter 2 Process Management 2. Thread 2.1. Introduction Example: Word processor (Tanenbaum 2001) Chapter 2 Process Management 2. Thread 2.1. Introduction Example: Multithreaded server (Silberschartz) Chapter 2 Process Management 2. Thread 2.1. Introduction Concept of thread ⚫ Basic CPU using unit, consists of ⚫ Thread ID ⚫ Program Counter ⚫ Registers ⚫ Stack space Chapter 2 Process Management 2. Thread 2.1. Introduction Concept of thread ⚫ Threads in the same process share ⚫ Code segment ⚫ Data segment (global objects) ⚫ Other OS’s resources (opening file…) ⚫ Thread can run same code segment with different context (Register set, program counter, stack) ⚫ LWP: Lightweight Process ⚫ A process has at least one thread Chapter 2 Process Management 2. Thread 2.1. Introduction Process’s memory organization with 4 threads (Linux / x86-32) Chapter 2 Process Management 2. Thread 2.1. Introduction Distinguishing between process and thread Process Thread Has code/data/heap and other segments No separating code or heap segment Has at least 1 thread Not stand alone but must be inside a process Threads in the same process share code/data/heap, Many threads can exist at the same time in each process. devices but have separated stack and registers First thread is the main thread and own process’s stack Create and switch process operations are expansive Create and switch thread operations are inexpensive Good protection due to own address space Common address space, need to protect When process terminated, resources are returned and Thread terminated, its stack is returned threads have to terminated Chapter 2 Process Management 2. Thread 2.1. Introduction Benefits of multithreaded programming ⚫ Responsiveness ⚫ Resource sharing ⚫ Economy ⚫ Utilization of multiprocessor architectures Chapter 2 Process Management 2. Thread 2.1. 
Introduction Benefits of multithreaded programming ⚫ Responsiveness ⚫ allow a program to continue running even if part of it is blocked or is performing a long operation ⚫ Example: A multi-threaded Web browser ⚫ 1 thread interactives with user ⚫ 1 thread downloads data ⚫ Resource sharing ⚫ threads share the memory and the resources of the process ⚫ Good for parallel algorithms (share data structures) ⚫ Threads communicate via sharing memory ⚫ Allow application to have many threads act in the same address space Chapter 2 Process Management 2. Thread 2.1. Introduction Benefits of multithreaded programming ⚫ Economy ⚫ Create, switch, terminate threads is less expensive than process ⚫ Utilization of multiprocessor architectures ⚫ each thread may be running in parallel on a different processor. Chapter 2 Process Management 2. Thread 2.1. Introduction Benefit of multithreading-> example Tính toán trên vector Vector computing Multi-threading model for (k = 0;k < n;k++) { void fn(a,b) a[k] = b[k]*c[k]; for(k = a; k < b; k ++){ } a[k] = b[k] ∗ c[k]; } void main(){ CreateThread(fn(0, n/4)); CreateThread(fn(n/4, n/2)); CreateThread(fn(n/2, 3n/4)); CreateThread(fn(3n/4, n)); } Chapter 2 Process Management 2. Thread 2.1. Introduction Thread implementation User space Kernel space Chapter 2 Process Management 2. Thread 2.1. Introduction User -Level Threads ⚫ Thread management is done by application ⚫ Kernel does not know about thread existence ⚫ Process scheduled like a single unit ⚫ Each process is assigned with a single state ⚫ Ready, waiting, running,.. ⚫ User threads are supported above the kernel and are implemented by a thread library ⚫ Library support creation, scheduling and management.. Chapter 2 Process Management 2. Thread 2.1. Introduction User -Level Threads ⚫ Advantage ⚫ Fast to create and manage ⚫ Disadvantage ⚫ if a thread perform blocking system call , the entire process will be blocked ⇒ Cannot make use of multi-thread. Chapter 2 Process Management 2. Thread 2.1. Introduction Kernel - Level threads ⚫ Kernel keeps information of process and threads ⚫ Threads management is performed by kernel ⚫ No thread management code inside application ⚫ Thread scheduling is done by kernel ⚫ OSs support kernel thread: Windows NT/2000/XP, Linux, OS/2,.. Chapter 2 Process Management 2. Thread 2.1. Introduction Kernel - Level threads ⚫ Advantage: ⚫ 1 thread perform system call (e.g. I/O request), other threads are not affected ⚫ In a multiprocessor environment, the kernel can schedule threads on different processors ⚫ Disadvantage: ⚫ Slow in thread creation and management Chapter 2 Process Management 2. Thread 2.2.Multi-threading model ⚫Introduction ⚫Multithreading model ⚫Thread implementation with Windows ⚫Multithreading problem Chapter 2 Process Management 2. Thread 2.2.Multi-threading model Introduction ⚫ Many systems provide support for both user and kernel threads - > different multithreading models Chapter 2 Process Management 2. Thread 2.2.Multi-threading model Many-to-One Model ⚫ Maps many user-level threads to 1 kernel thread. ⚫ Thread management is done in user space ⚫ Efficient ⚫ the entire process will block if a thread makes a blocking system call ⚫ multiple threads are unable to run in parallel on multi processors ⚫ implemented on OSs that do not support kernel threads use the many-to-one model Chapter 2 Process Management 2. 
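The vector-computation slide above sketches splitting a[k] = b[k]*c[k] across four threads in pseudocode. A runnable version under assumed parameters (N = 1000, and a small range struct to pass each thread its sub-range) could look like this:

#include <windows.h>
#include <stdio.h>

#define N 1000

double a[N], b[N], c[N];

struct range { int lo, hi; };

/* Each thread computes a[k] = b[k] * c[k] on its own sub-range. */
DWORD WINAPI fn(LPVOID p)
{
    struct range *r = (struct range *)p;
    for (int k = r->lo; k < r->hi; k++)
        a[k] = b[k] * c[k];
    return 0;
}

int main(void)
{
    HANDLE h[4];
    struct range r[4];

    for (int k = 0; k < N; k++) { b[k] = k; c[k] = 2.0; }

    /* Split [0, N) into four quarters, one thread per quarter. */
    for (int i = 0; i < 4; i++) {
        r[i].lo = i * N / 4;
        r[i].hi = (i + 1) * N / 4;
        h[i] = CreateThread(NULL, 0, fn, &r[i], 0, NULL);
    }
    WaitForMultipleObjects(4, h, TRUE, INFINITE);

    printf("a[10] = %.1f\n", a[10]);    /* expect 20.0 */
    return 0;
}

On a multiprocessor the four threads can run in parallel, which is the benefit the slide is illustrating.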
Thread 2.2.Multi-threading model One-to-one Model ⚫ Maps each user thread to a kernel thread ⚫ Allow another thread to run when a thread makes a blocking system call ⚫ Creating a user thread requires creating the corresponding kernel thread ⚫ the overhead of creating kernel threads can burden the performance of an application ⚫ ⇒ restrict the number of threads supported by the system ⚫ Implemented in Window NT/2000/XP Chapter 2 Process Management 2. Thread 2.2.Multi-threading model Many-to-Many Model ⚫ Multiplexes many user-level threads to a smaller or equal number of kernel threads ⚫ Number of kernel threads: specific to either a particular application or a particular machine ⚫ E.g.: on multiprocessor -> more kernel thread than on uniprocessor ⚫ Combine advantages of previous models ⚫ Developers can create as many user threads as necessary ⚫ kernel threads can run in parallel on a multiprocessor ⚫ when a thread performs a blocking system call, the kernel can schedule another thread for execution ⚫ Supported in: UNIX Chapter 2 Process Management 2. Thread 2.2.Multi-threading model scheduler activation ⚫ A way for kernel to communicate with user level thread manager to maintain an appropriate number of kernel level thread for process Chapter 2 Process Management 2. Thread 2.2.Multi-threading model Two-level model ⚫ A variation of many to many ⚫ Allow favor process with higher priority Chapter 2 Process Management 2. Thread 2.3. Thread implementation with Windows ⚫Introduction ⚫Multithreading model ⚫Thread implementation with Windows ⚫Multithreading problem Chapter 2 Process Management 2. Thread 2.3. Thread implementation with Windows Functions with thread in WIN32 API ⚫ HANDLE CreateThread(...); ⚫ LPSECURITY_ATTRIBUTES lpThreadAttributes, ⇒A pointer to a SECURITY_ATTRIBUTES structure that determines whether the returned handle can be inherited by child processes ⚫ DWORD dwStackSize, ⇒The initial size of the stack, in bytes ⚫ LPTHREAD_START_ROUTINE lpStartAddress, ⇒ pointer to the application-defined function to be executed by the thread ⚫ LPVOID lpParameter, ⇒A pointer to a variable to be passed to the thread ⚫ DWORD dwCreationFlags, ⇒The flags that control the creation of the thread ⚫ CREATE_SUSPENDED : thread is created in a suspended state ⚫ 0: thread runs immediately after creation ⚫ LPDWORD lpThreadId ⇒pointer to a variable that receives the thread identifier ⚫ Return value: handle to the new thread or NULL if the function fails Chapter 2 Process Management 2. Thread 2.3. Thread implementation with Windows Example #include #include void Routine(int *n){ printf("My argument is %d\n", &n); } int main(){ int i, P; DWORD Id; HANDLE hHandles; for (i=0;i < 5;i++) { P[i] = i; hHandles[i] = CreateThread(NULL,0, (LPTHREAD_START_ROUTINE)Routine,&P[i],0,&Id); printf("Thread %d was created\n",Id); } for (i=0;i < 5;i++) WaitForSingleObject(hHandles[i],INFINITE); return 0; } Chapter 2 Process Management 2. Thread 2.3. Thread implementation with Windows Thread in Windows XP Thread includes ⚫ Thread ID ⚫ Registers ⚫ user stack used in user mode, kernel stack used in kernel mode. ⚫ Separated memory area used by runtime and dynamic linked library Executive thread block Kernel thread block Thread environment block Chapter 2 Process Management 2. Thread 2.4. Multithreading problem ⚫Introduction ⚫Multithreading model ⚫Thread implementation with Windows ⚫Multithreading problem Chapter 2 Process Management 2. Thread 2.4. 
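The CreateThread() example above declares P and hHandles as scalars but then indexes them as arrays, and prints the address of its argument rather than its value, apparently through extraction damage. A corrected sketch of the same program:

#include <windows.h>
#include <stdio.h>

/* Thread routine: receives a pointer to its argument and prints the value. */
DWORD WINAPI Routine(LPVOID arg)
{
    int *n = (int *)arg;
    printf("My argument is %d\n", *n);
    return 0;
}

int main(void)
{
    int    i, P[5];
    DWORD  Id;
    HANDLE hHandles[5];

    for (i = 0; i < 5; i++) {
        P[i] = i;                          /* each thread gets its own argument slot */
        hHandles[i] = CreateThread(NULL, 0, Routine, &P[i], 0, &Id);
        printf("Thread %lu was created\n", Id);
    }

    for (i = 0; i < 5; i++)
        WaitForSingleObject(hHandles[i], INFINITE);

    return 0;
}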
Multithreading problem Example #include #include int x = 0, y = 1; void T1(){ while(1){ x = y + 1; printf("%4d", x); } } void T2(){ while(1){ y = 2; y = y * 2; } } int main(){ HANDLE h1, h2; DWORD Id; h1=CreateThread(NULL,0,(LPTHREAD_START_ROUTINE)T1,NULL,0,&Id); h2=CreateThread(NULL,0,(LPTHREAD_START_ROUTINE)T2,NULL,0,&Id); WaitForSingleObject(h1,INFINITE); WaitForSingleObject(h2,INFINITE); return 0; } Chapter 2 Process Management 2. Thread 2.4. Multithreading problem Running result Chapter 2 Process Management 2. Thread 2.4. Multithreading problem Explanation Thread T1 Thread T2 x=y+1 y=2 x=y+1 y=y*2 x=y+1 The result of parallel running threads depends on the order of accessing the sharing variable t Chapter 2 Process Management ① Process ② Thread ③ CPU scheduling ④ Critical resource and process synchronization ⑤ Deadlock and solutions Chapter 2 Process Management 3. CPU scheduling 3.1. Basic Concepts ⚫Basic Concepts ⚫Scheduling Criteria ⚫Scheduling algorithms ⚫Multi-processor scheduling Chapter 2 Process Management 3. CPU scheduling 3.1. Basic Concepts Introduction ⚫System has one processor → Only 1 process is running at a time ⚫Process is executed until it must wait, typically for the completion of some I/O request ⚫Simple system: CPU is not used ⇒ Waste CPU time ⚫Multiprogramming system: try to use this time productively, give CPU for another process ⚫Several ready processes are kept inside memory at a single time ⚫ When a process has to wait, OS takes the CPU away and gives the CPU to another process ⚫Scheduling is a fundamental OS’s function ⚫Switch CPU between processes → exploit the system more efficiently Chapter 2 Process Management 3. CPU scheduling 3.1. Basic Concepts Introduction Chapter 2 Process Management 3. CPU scheduling 3.1. Basic Concepts CPU-I/0 Burst Cycle ⚫ Process execution consists of a cycle of CPU execution and I/O wait ⚫Begins with a CPU burst ⚫followed by an I/O burst ⚫CPU burst → I/O burst → CPU burst → I/O burst →... ⚫End: the last CPU burst will end with a system request to terminate execution ⚫Process classification ⚫Based on distribution of CPU & I/O burst ⚫ CPU-bound program might have a few very long ⚫ I/O-bound program would typically have many very short CPU bursts ⚫ help us select an appropriate CPU- scheduling algorithm Chapter 2 Process Management 3. CPU scheduling 3.1. Basic Concepts CPU-I/0 Burst Cycle CPU - I/O ⚫Distinguish the types of processes ⚫Based on time allocation for CPU & I / O cycles ⚫ CPU-bound process has several long CPU cycles ⚫ I/0-bound process has many short CPU cycles ⚫To choose the appropriate scheduling algorithm Chapter 2 Process Management 3. CPU scheduling 3.1. Basic Concepts CPU Scheduler ⚫CPU idle -> select 1 process from ready queue to run it ⚫Process in ready queue ⚫ FIFO queue, Priority queue, Simple linked list... ⚫CPU scheduling decisions may take place when process switches from 1) running -> waiting state (I/O request) 2) running -> ready state (out of CPU usage time→ time interrupt) 3) waiting -> ready state (e.g. completion of I/O) 4) Or process terminates ⚫Note ⚫Case 1&4 ⇒ non-preemptive scheduling scheme ⚫Other cases ⇒ preemptive scheduling scheme Chapter 2 Process Management 3. CPU scheduling 3.1. Basic Concepts Preemptive and non-preemptive Scheduling ⚫Non-preemptive scheduling ⚫Process keeps the CPU until it releases the CPU either by ⚫ terminating ⚫ switching to the waiting state ⚫ does not require the special hardware (timer) ⚫Example: DOS, Win 3.1, Macintosh Chapter 2 Process Management 3. 
CPU scheduling 3.1. Basic Concepts Preemptive and non-preemptive Scheduling ⚫Preemptive scheduling ⚫Process only allowed to run in specified period ⚫End of period, time interrupt appear, dispatcher is invoked to decide to resume process or select another process ⚫Protect CPU from “CPU hungry" processes ⚫Sharing data problem ⚫Process 1 updating data and the CPU is taken ⚫Process 2 executed and read data which is not completely update ⚫Example: Multiprogramming OS WinNT, UNIX Chapter 2 Process Management 3. CPU scheduling 3.1. Basic Concepts Preemptive and non-preemptive Scheduling Chapter 2 Process Management 3. CPU scheduling 3.2. Scheduling Criteria ⚫Basic Concepts ⚫Scheduling Criteria ⚫Scheduling algorithms ⚫Multi-processor scheduling Chapter 2 Process Management 3. CPU scheduling 3.2. Scheduling Criteria Scheduling Criteria I ⚫CPU utilization ⚫ keep the CPU as busy as possible ⚫ should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system) ⚫Throughput ⚫ number of processes completed per time unit ⚫Long process: 1 process/hour ⚫Short processes: 10 processes/second ⚫Turnaround time ⚫Interval from the time of submission of a process to the time of completion ⚫Can be the sum of ⚫periods spent waiting to get into memory ⚫waiting in the ready queue ⚫executing on the CPU ⚫doing I/O Chapter 2 Process Management 3. CPU scheduling 3.2. Scheduling Criteria Scheduling Criteria II ⚫Waiting time ⚫sum of the periods spent waiting in the ready queue ⚫CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue ⚫Response time ⚫ time from the submission of a request until the first response is produced ⚫ process can produce some output fairly early ⚫ continue computing new results while previous results are being output to the user ⚫ Assumption: Consider one CPU burst (ms) per process ⚫ Measure: Average waiting time Chapter 2 Process Management 3. CPU scheduling 3.3. Scheduling algorithms ⚫Basic Concepts ⚫Scheduling Criteria ⚫Scheduling algorithms ⚫Multi-processor scheduling Chapter 2 Process Management 3. CPU scheduling 3.3. Scheduling algorithms FCFS: First Come, First Served ⚫Rule: process that requests the CPU first is allocated the CPU first Process owns CPU until it terminates or blocks for I/O request Chapter 2 Process Management 3. CPU scheduling 3.3. Scheduling algorithms FCFS: First Come, First Served ⚫Rule: process that requests the CPU first is allocated the CPU first Process owns CPU until it terminates or blocks for I/O request ⚫Example Process Burst Time ⚫Pros and cons ⚫Simple, easy to implement ⚫Short process must wait like long process ⚫If P1 executed last? Chapter 2 Process Management 3. CPU scheduling 3.3. Scheduling algorithms SJF: Shortest Job First ⚫Rule: associates with each process the length of the latter's next CPU burst ⚫ process that has the smallest next CPU burst ⚫Two methods ⚫Non-preemptive ⚫preemptive (SRTF: Shortest Remaining Time First) ⚫Example Process Burst Time Arrival Time ⚫Pros and Cons ⚫SJF (SRTF) is optimal: Average waiting time is minimum ⚫It’s not possible to predict the length of next CPU burst ⚫Predict based on previous one Chapter 2 Process Management 3. CPU scheduling 3.3. 
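The burst-time tables for the FCFS and SJF examples are not reproduced in this transcript. With assumed bursts of 24, 3 and 3 ms for P1, P2 and P3, all arriving at time 0, a small program computes the average waiting time under both policies and shows why SJF is optimal for this metric:

#include <stdio.h>

/* Assumed CPU bursts (ms) for three processes arriving at time 0. */
static const int burst[3] = { 24, 3, 3 };

/* Average waiting time when the processes run in the given order. */
static double avg_wait(const int order[3])
{
    int t = 0, total = 0;
    for (int i = 0; i < 3; i++) {
        total += t;                 /* each process waits until the CPU is free */
        t += burst[order[i]];
    }
    return total / 3.0;
}

int main(void)
{
    int fcfs[3] = { 0, 1, 2 };      /* arrival order: P1, P2, P3                */
    int sjf[3]  = { 1, 2, 0 };      /* shortest burst first: P2, P3, P1         */

    printf("FCFS average waiting time: %.1f ms\n", avg_wait(fcfs));  /* 17.0 */
    printf("SJF  average waiting time: %.1f ms\n", avg_wait(sjf));   /*  3.0 */
    return 0;
}

Letting the long P1 run first makes the two short processes wait (17 ms on average); running the short bursts first cuts the average to 3 ms.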
Scheduling algorithms Priority Scheduling ⚫Each process is attached with a priority value (a number) ⚫CPU is allocated to the process with the highest priority ⚫SJF: priority (p) is the inverse of the (predicted) next CPU burst ⚫2 methods ⚫Non-preemptive ⚫Preemptive Process Burst Time Priority Arrival Time ⚫Example 0 1 2 3 4 ⚫“Starvation“ problem: low-priority process has to wait indefinitely (even forever) ⚫Solution: increase priority by the time process wait in the system Chapter 2 Process Management 3. CPU scheduling 3.3. Scheduling algorithms Round Robin Scheduling ⚫Rule ⚫Each process is given a time quantum (time slice) τ to be executed ⚫When time’s up processor is preemptive, and process is placed in the last position of ready queue ⚫If there are n process, longest waiting time is (n − 1)τ ⚫Example Process Burst Time quantum τ = 4 ⚫Problem: select τ ⚫τ large: FCFS ⚫τ small: CPU is switched frequently ⚫Commonly τ = 10-100ms Chapter 2 Process Management 3. CPU scheduling 3.3. Scheduling algorithms Round Robin Scheduling Chapter 2 Process Management 3. CPU scheduling 3.3. Scheduling algorithms Multilevel Queue Scheduling ⚫Ready queue is divided into several separate queues ⚫Processes are permanently assigned to 1 queue ⚫Based on some properties of the process, such as memory size, priority, or type.. ⚫Each queue has its own scheduling algorithm Chapter 2 Process Management 3. CPU scheduling 3.3. Scheduling algorithms Multilevel Queue Scheduling ⚫among the queues, the scheduler must consider ⚫Commonly implemented as fixed-priority preemptive scheduling ⚫Processes in lower priority queue only executed if higher priority queues are empty ⚫High priority process preemptive CPU from lower priority process ⚫Starvation is possible ⚫Time slice between the queues ⚫foreground process queue, 80% CPU time for RR ⚫background process queue, 20% CPU time for FCFS Chapter 2 Process Management 3. CPU scheduling 3.3. Scheduling algorithms Multilevel Queue Scheduling (Example) Chapter 2 Process Management 3. CPU scheduling 3.3. Scheduling algorithms Multilevel Feedback Queue ⚫Allows a process to move between queues ⚫separate processes with different CPU-burst characteristics ⚫ If a process uses too much CPU time ->moved to a lower- priority queue ⚫ I/O-bound and interactive processes in the higher-priority queues ⚫ process that waited too long in a lower- priority queue may be moved to a higher-priority queue ⚫Prevent “starvation” Chapter 2 Process Management 3. CPU scheduling 3.3. Scheduling algorithms Multilevel Feedback Queue ⚫Queue’s scheduler is defined by the following parameters: ⚫number of queues ⚫scheduling algorithm for each queue ⚫ method used to determine when to upgrade/demote a process to a higher/lower priority queue ⚫method used to determine which queue a process will enter when that process needs service Chapter 2 Process Management 3. CPU scheduling 3.3. Scheduling algorithms Multilevel Feedback Queue ⚫Example Chapter 2 Process Management 3. CPU scheduling 3.4. Multi-processor scheduling ⚫Basic Concepts ⚫Scheduling Criteria ⚫Scheduling algorithms ⚫Multi-processor scheduling Chapter 2 Process Management 3. CPU scheduling 3.4. 
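The Round Robin example table is likewise missing from the transcript. Under the same assumed bursts (24, 3, 3 ms, all arriving at time 0) and a quantum of 4 ms, a small simulation computes the average waiting time; the simple cyclic scan stands in for the FIFO ready queue, which is adequate here because all processes arrive together. It can also serve as a seed for the multilevel-feedback-queue homework mentioned later:

#include <stdio.h>

#define NPROC   3
#define QUANTUM 4

int main(void)
{
    int burst[NPROC]     = { 24, 3, 3 };   /* assumed bursts (ms)               */
    int remaining[NPROC] = { 24, 3, 3 };
    int finish[NPROC]    = { 0, 0, 0 };
    int t = 0, done = 0;

    while (done < NPROC) {
        for (int i = 0; i < NPROC; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            t += slice;                    /* process i owns the CPU for one slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = t; done++; }
        }
    }

    /* waiting time = completion time - burst time (all arrivals at 0)           */
    int total_wait = 0;
    for (int i = 0; i < NPROC; i++)
        total_wait += finish[i] - burst[i];
    printf("Average waiting time (RR, q=%d): %.2f ms\n",
           QUANTUM, total_wait / (double)NPROC);
    return 0;
}

For these bursts the schedule is P1 (0-4), P2 (4-7), P3 (7-10), then P1 to completion, giving an average wait of about 5.67 ms, between the FCFS and SJF figures above.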
Multi-processor scheduling Problem ⚫the scheduling problem is correspondingly more complex ⚫Load sharing ⚫separate queue for each processor ⚫ one processor could be idle, with an empty queue, while another processor was very busy ⚫ use a common ready queue ⚫problems : →two processors choose the same process or →processes are lost from the queue ⚫asymmetric multiprocessing ⚫ only one processor accesses the system’s queue -> no sharing problem ⚫I/O-bound processes may bottleneck on the CPU that is performing all of the operations Chapter 2 Process Management 3. CPU scheduling Homework ⚫Write program to simulate multilevel feedback queue Chapter 2 Process Management ① Process ② Thread ③ CPU scheduling ④ Critical resource and process synchronization ⑤ Deadlock and solutions Chapter 2 Process Management 4. Critical resource and process synchronization ⚫Critical resource ⚫Internal lock method ⚫Test and Set method ⚫Semaphore mechanism ⚫Process synchronization example ⚫Monitor Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Example #include #include int x = 0, y = 1; void T1(){ while(1){ x = y + 1; printf("%4d", x); } } void T2(){ while(1){ y = 2; y = y * 2; } } int main(){ HANDLE h1, h2; DWORD Id; h1=CreateThread(NULL,0,(LPTHREAD_START_ROUTINE)T1,NULL,0,&Id); h2=CreateThread(NULL,0,(LPTHREAD_START_ROUTINE)T2,NULL,0,&Id); WaitForSingleObject(h1,INFINITE); WaitForSingleObject(h2,INFINITE); return 0; } Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Results Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource 2 processes in the system Producer creates products producer-consumer I Consumer consumes created product Producer and Consumer must be synchronized. Do not create product when there is no space left Do not consume product when there is none Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Producer-Consumer Problem II Producer Consumer while(1) { while(1){ while (Counter == BUFFER_SIZE) ; nextConsumed = Buffer[OUT]; OUT =(OUT + 1) % BUFFER_SIZE; Buffer[IN] = nextProduced; Counter--; Counter++; } } Remark Producer creates 1 product Consumer consumes 1 product ⇒Number of product inside Buffer does not change Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Producer-Consumer Problem II counter++ Counter++ counter-- Load R1, counter Inc R1 Store counter,R1 counter-- Load R2, counter Dec R2 Store counter,R2 R1 = ? t R2 = ? counter = 5 Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Producer-Consumer Problem II counter++ Counter++ counter-- Load R1, counter Load R1,counter Inc R1 Store counter,R1 counter-- Load R2, counter Dec R2 Store counter,R2 R1 = 5 t R2 = ? counter = 5 Chapter 2 Process Management 4. 
Critical resource and process synchronization 4.1 Critical resource Producer-Consumer Problem II counter++ Counter++ counter-- Load R1, counter Load R1,counter Inc R1 Load R2,counter Store counter,R1 counter-- Load R2, counter Dec R2 Store counter,R2 R1 = 5 t R2 = 5 counter = 5 Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Producer-Consumer Problem II counter++ Counter++ counter-- Load R1, counter Load R1,counter Inc R1 Load R2,counter Store counter,R1 Inc R1 counter-- Load R2, counter Dec R2 Store counter,R2 R1 = 6 t R2 = 5 counter = 5 Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Bài toán người sản xuất (producer)-người tiêu thụ(consumer) II counter++ Counter++ counter-- Load R1, counter Load R1,counter Inc R1 Load R2,counter Store counter,R1 Inc R1 Dec R2 counter-- Load R2, counter Dec R2 Store counter,R2 R1 = 6 t R2 = 4 counter = 5 Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Producer-Consumer Problem II counter++ Counter++ counter-- Load R1, counter Load R1,counter Inc R1 Load R2,counter Store counter,R1 Inc R1 Dec R2 counter-- Store counter,R1 Load R2, counter Dec R2 Store counter,R2 R1 = 6 t R2 = 4 counter = 6 Chương 2 Quản lí tiến trình 4. Tài nguyên găng và điều độ tiến trình 4.1 Khái niệm tài nguyên găng Bài toán người sản xuất (producer)-người tiêu thụ(consumer) II counter++ Counter++ counter-- Load R1, counter Load R1,counter Inc R1 Load R2,counter Store counter,R1 Inc R1 Dec R2 counter-- Store counter,R1 Store counter,R2 Load R2, counter Dec R2 Store counter,R2 R1 = 6 t R2 = 4 counter = 4 Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Definition Resource Everything that is required for process’s execution Critical resource Resource that is limited of sharing capability Required concurrently by processes Can be either physical devices or sharing data Problem Sharing critical resource may not guarantee data completeness ⇒ Require process synchronization mechanism Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Race condition Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Race condition Situation where the results of many processes access the sharing data depend of the order of these actions Make program’s result undefinable Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Race condition Prevent race condition by synchronize concurrent running processes Only 1 process can access sharing data at a time Variable counter in Producer-Consumer problem The code segments that access sharing data in the processes must be executed in a defined order E.g.: x←y+1 instruction in Thread T1 only both two instruction of Thread T2 are done Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Critical section The part of the program where the shared memory is accessed is called the critical region or critical section If there are more than 1 process use critical resource, then we must synchronize them Object: guarantee that no more than 1 process can stay inside critical section Chapter 2 Process Management 4. 
Critical resource and process synchronization 4.1 Critical resource Conditions to have a good solution ⚫ Mutual Exclusion: Critical resource does not have to serve the number of process more than its capability at any time If process Pi is executing in its critical section, then no other processes could execute in their critical sections Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Conditions to have a good solution ⚫ Mutual Exclusion: ⚫ Progressive: If critical resource still able to serve and there are process want to be executed in critical section then this process can use critical resource Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Conditions to have a good solution ⚫ Mutual Exclusion: ⚫ Progressive: ⚫ Bounded Waiting: There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Rule There are 2 processes P1&P2 concurrently running Sharing the same critical resource Each process put the critical section at begin and the remainder section is next Process must ask before enter the critical section {entry section} Process perform {exit section} after exiting from critical section Program structure Chapter 2 Process Management 4. Critical resource and process synchronization 4.1 Critical resource Methods’ classification Low level method Variable lock Test and set Semaphore High level method Monitor Chapter 2 Process Management 4. Critical resource and process synchronization ⚫Critical resource ⚫Variable lock method ⚫Test and Set method ⚫Semaphore mechanism ⚫Process synchronization example ⚫Monitor Chapter 2 Process Management 4. Critical resource and process synchronization 4.2 Variable lock Principle Chapter 2 Process Management 4. Critical resource and process synchronization 4.2 Variable lock Principle sharing memory area Critical Resource Each process uses 1 byte in the sharing memory area as a lock enters critical section, lock (byte lock = true) exits from critical section, unlock (byte lock= false) Process want to enter critical section: check other process’s lock ‘s status Locking ⇒ Wait Not lock ⇒ Has the right to enter critical section Chapter 2 Process Management 4. Critical resource and process synchronization 4.2 Variable lock Algorithm Share var C1,C2 Boolean // sharing variable used as lock Initialization C1 = C2 = false // Critical resource is free Process P1 Process P2 Process P1’s critical section Process P2’s critical section Process P1’s remaining section Process P2’s remaining section Chapter 2 Process Management 4. Critical resource and process synchronization 4.2 Variable lock Algorithm Share var C1,C2 Boolean // sharing variable used as lock Initialization C1 = C2 = false // Critical resource is free Process P1 Process P2 Process P1’s critical section Process P2’s critical section Process P1’s remaining section Process P2’s remaining section Chapter 2 Process Management 4. 
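The bodies of the two variable-lock algorithms above are not legible in this transcript. The sketch below is a reconstruction, based on the Case 1 / Case 2 labels in the remark that follows, of the two usual attempts (shown for P1; P2 is symmetric with C1 and C2 swapped); the exact code is an assumption:

volatile int C1 = 0, C2 = 0;    /* shared lock variables; false (0) = resource free */

/* Case 1: test the other's lock first, then set your own.
   Both processes can pass the test before either sets its lock,
   so two processes may enter together: mutual exclusion fails.               */
void P1_case1(void)
{
    while (C2) ;        /* wait while P2's lock is set (busy waiting)          */
    C1 = 1;             /* set own lock: too late, P2 may already be inside    */
    /* ... P1's critical section ... */
    C1 = 0;             /* exit section: release the lock variable             */
    /* ... P1's remaining section ... */
}

/* Case 2: set your own lock first, then test the other's.
   If both processes set their locks at the same time, each waits for the
   other forever even though the resource is free: progress fails.            */
void P1_case2(void)
{
    C1 = 1;             /* entry section: announce intent                      */
    while (C2) ;        /* wait while P2's lock is set (busy waiting)          */
    /* ... P1's critical section ... */
    C1 = 0;             /* exit section: release the lock variable             */
    /* ... P1's remaining section ... */
}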
Critical resource and process synchronization 4.2 Variable lock Not properly synchronize Remark Two processes request resource at the same time Mutual exclusion problem (Case 1) Progressive problem (Case 2) Reason: The following actions are done separately Test the right to enter critical section Set the right to enter critical section Chapter 2 Process Management 4. Critical resource and process synchronization 4.2 Variable lock ⚫Utilize turn variable to show process with priority Dekker’s algorithm Process P1 Process P2 Process P1’s critical section P2’s critical section P1’s remaining section P2s remaining section Chapter 2 Process Management 4. Critical resource and process synchronization 4.2 Variable lock Synchronize properly for all cases NoRemark hardware support requirement -> implement in any languages Complex when the number of processes and resources increase “busy waiting” before enter critical section When waiting, process must check the right to enter the critical section => Waste processor’s time Chapter 2 Process Management 4. Critical resource and process synchronization ⚫Critical resource ⚫Variable lock method ⚫Test and Set method ⚫Semaphore mechanism ⚫Process synchronization example ⚫Monitor Chapter 2 Process Management 4. Critical resource and process synchronization 4.3 Test anh Set Principle Chapter 2 Process Management 4. Critical resource and process synchronization 4.3 Test anh Set Utilize hardware support Principle Hardware provides uninterruptible instructions Test and change the content of 1 word Swap the content of 2 different words Instruction is executed atomically Code block is uninterruptible when executing When called at the same time, done in any order boolean TestAndSet(VAR boolean target) { void Swap(VAR boolean , VAR boolean b) { boolean rv = target; boolean temp = a; target = true; a = b; return rv; b = temp; } } Chapter 2 Process Management 4. Critical resource and process synchronization 4.3 Test anh Set Sharing variable Boolean: Lock: resource’s status: Algorithm with TestAndSet instruction Locked (Lock=true) Free (Lock=false) Initialization: Lock = false ⇒ Resource is free Algorithm for process Pi Process P’s critical section P’s remaining section Chapter 2 Process Management 4. Critical resource and process synchronization 4.3 Test anh Set Sharing variable Lock shows the resource’s status Algorithm Local variablewith Swap for each instruction process: Key: Boolean Initialization: Lock = false ⇒ Resource is free Algorithm for process Pi Process P’s critical section P’s remaining section Chapter 2 Process Management 4. Critical resource and process synchronization 4.3 Test anh Set Remark Simple, complexity is not increase when number of processes and critical resource increase “busy waiting” before enter critical section When waiting, process has to check if sharing resource is free or not => Waste processor’s time No bounded waiting guarantee The next process enters critical section is dependent on the resource release time of current resource-using process ⇒ Need to be solved Chapter 2 Process Management 4. Critical resource and process synchronization ⚫Critical resource ⚫Variable lock method ⚫Test and Set method ⚫Semaphore mechanism ⚫Process synchronization example ⚫Monitor Chapter 2 Process Management 4. Critical resource and process synchronization 4.4. Semaphore mechanism Semaphore An integer variable, initialized by resource sharing capability Number of available resources (e.g. 
4.4 Semaphore mechanism
Semaphore
⚫ An integer variable, initialized to the resource's sharing capability:
  the number of available resources (e.g. 3 printers), or
  the number of resource units (e.g. 10 empty slots in a buffer).
⚫ It can only be changed by two operations, P and V:
  operation P(S), also written wait(S);
  operation V(S), also written signal(S).
⚫ P and V are uninterruptible instructions.

Semaphore usage I: the n-process critical-section problem
⚫ The processes share a semaphore mutex initialized to 1.
⚫ Each process Pi is organized as: wait(mutex); critical section; signal(mutex); remainder section.

Semaphore usage II: ordering the execution inside processes
⚫ P1 must execute a statement S1 and P2 a statement S2, and S2 must be executed only after S1 has completed.
⚫ P1 and P2 share a common semaphore synch, initialized to 0.
⚫ Code for each process:
  Process 1: S1; signal(synch); {remainder code}
  Process 2: wait(synch); S2; {remainder code}

To overcome the need for busy waiting, use two operations:
⚫ block(): temporarily suspends the running process.
⚫ wakeup(P): resumes process P that was suspended by block().

block()
⚫ When a process executes the wait operation and the semaphore value is not positive, it must wait; it blocks itself instead of busy waiting.
⚫ The block operation places the process into a waiting queue associated with the semaphore, the process's state is switched to waiting, and control is transferred to the CPU scheduler.

wakeup(P)
⚫ A process that is blocked waiting on a semaphore S is restarted when some other process executes a signal operation.
⚫ The process is restarted by a wakeup operation, which changes it from the waiting state to the ready state and places it in the ready queue.

Semaphore implementation
⚫ Each semaphore keeps a counter value and a queue of waiting processes: wait() may insert the calling process into the queue, and signal() gets one process from the queue and wakes it up.
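The implementation slide shows only the value/queue picture; here is a minimal C-style sketch of the usual textbook implementation, assuming that the two operations themselves execute atomically and that block(), wakeup() and the queue helpers are provided by the scheduler. All of these helper names are illustrative.

    /* Assumed scheduler and queue helpers. */
    struct process;
    struct process *current_process(void);
    void block(void);                              /* suspend the calling process      */
    void wakeup(struct process *p);                /* move p from waiting to ready     */
    void enqueue(struct process **q, struct process *p);
    struct process *dequeue(struct process **q);

    typedef struct {
        int value;                 /* may become negative: |value| = number waiting    */
        struct process *queue;     /* queue of blocked processes (their PCBs)          */
    } semaphore;

    void sem_P(semaphore *S)       /* P(S) / wait(S), assumed to run atomically        */
    {
        S->value--;
        if (S->value < 0) {
            enqueue(&S->queue, current_process());
            block();
        }
    }

    void sem_V(semaphore *S)       /* V(S) / signal(S), assumed to run atomically      */
    {
        S->value++;
        if (S->value <= 0)
            wakeup(dequeue(&S->queue));            /* release one waiting process      */
    }

With this implementation a negative value records how many processes are waiting on the semaphore, which is exactly what the example below shows.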
Synchronization example
Three processes P1, P2 and P3 share one critical resource protected by a semaphore S.
⚫ Initially: S.Value = 1, S.Ptr = NULL; P1, P2 and P3 are running.
⚫ P1 → P(S): S.Value = 0; P1 enters the critical section.
⚫ P2 → P(S): S.Value = -1; P2 is blocked; S.Ptr → PCB2.
⚫ P3 → P(S): S.Value = -2; P3 is blocked; S.Ptr → PCB2, PCB3.
⚫ P1 → V(S): S.Value = -1; P2 is resumed and runs; S.Ptr → PCB3.
⚫ P2 → V(S): S.Value = 0; P3 is resumed and runs; S.Ptr = NULL.
⚫ P3 → V(S): S.Value = 1; S.Ptr = NULL.

Remark
⚫ Easy to apply to complex systems; no busy waiting.
⚫ The effectiveness depends on the user:
  correct synchronization: P(S); {critical section}; V(S)
  wrong order: V(S); {critical section}; P(S)
  wrong command: P(S); {critical section}; P(S)
⚫ P(S) and V(S) are non-shareable ⇒ P(S) and V(S) are themselves critical sections ⇒ they need synchronization.
  Uniprocessor system: forbid interrupts while performing wait() and signal().
  Multiprocessor system: it is not possible to forbid interrupts on the other processors; use the variable-lock method instead ⇒ busy waiting, but the waiting time is short (about 10 instructions).

Semaphore object in the WIN32 API
⚫ CreateSemaphore(...): creates a Semaphore object.
  LPSECURITY_ATTRIBUTES lpSemaphoreAttributes ⇒ pointer to a SECURITY_ATTRIBUTES structure; may the handle be inherited?
  LONG lInitialCount ⇒ initial count of the Semaphore object.
  LONG lMaximumCount ⇒ maximum count of the Semaphore object.
  LPCTSTR lpName ⇒ name of the Semaphore object.
  Example: CreateSemaphore(NULL, 0, 1, NULL);
  Returns a HANDLE to the Semaphore object, or NULL on failure.
⚫ WaitForSingleObject(HANDLE h, DWORD time): waits on the object (the P operation).
⚫ ReleaseSemaphore(...): the V operation.
  HANDLE hSemaphore ⇐ handle of the Semaphore object.
  LONG lReleaseCount ⇐ amount by which to increase the semaphore object's current count.
  LPLONG lpPreviousCount ⇐ pointer to a variable that receives the previous count.
  Example: ReleaseSemaphore(S, 1, NULL);
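As a concrete illustration of those calls, here is a small, hedged example of two Windows threads taking turns in a critical section through a semaphore. CreateSemaphore, WaitForSingleObject, ReleaseSemaphore, CreateThread and CloseHandle are the real Win32 functions; the thread body and all other names are made up for this sketch.

    #include <windows.h>
    #include <stdio.h>

    static HANDLE hSem;                              /* shared semaphore handle         */

    DWORD WINAPI worker(LPVOID arg)
    {
        WaitForSingleObject(hSem, INFINITE);         /* P(S): wait until the count > 0  */
        printf("thread %ld in critical section\n", (long)(LONG_PTR)arg);
        ReleaseSemaphore(hSem, 1, NULL);             /* V(S): add 1 back to the count   */
        return 0;
    }

    int main(void)
    {
        /* initial count 1, maximum count 1: behaves like a mutual-exclusion semaphore */
        hSem = CreateSemaphore(NULL, 1, 1, NULL);

        HANDLE t[2];
        for (int i = 0; i < 2; i++)
            t[i] = CreateThread(NULL, 0, worker, (LPVOID)(LONG_PTR)i, 0, NULL);

        WaitForMultipleObjects(2, t, TRUE, INFINITE);
        CloseHandle(t[0]);
        CloseHandle(t[1]);
        CloseHandle(hSem);
        return 0;
    }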
4.5 Process synchronization examples
⚫ Producer-Consumer problem
⚫ Dining Philosophers problem
⚫ Readers-Writers problem
⚫ Sleeping Barber problem
⚫ Bathroom problem

Producer-Consumer problem (shared counter, busy waiting)

    /* Producer */
    while (1) {
        while (Counter == BUFFER_SIZE) ;
        Buffer[IN] = nextProduced;
        IN = (IN + 1) % BUFFER_SIZE;
        Counter++;
    }

    /* Consumer */
    while (1) {
        while (Counter == 0) ;
        nextConsumed = Buffer[OUT];
        OUT = (OUT + 1) % BUFFER_SIZE;
        Counter--;
    }

Producer-Consumer problem with block() and wakeup()

    /* Producer */
    while (1) {
        if (Counter == SIZE) block();
        Buffer[IN] = nextProduced;
        IN = (IN + 1) % BUFFER_SIZE;
        Counter++;
        if (Counter == 1) wakeup(Consumer);
    }

    /* Consumer */
    while (1) {
        if (Counter == 0) block();
        nextConsumed = Buffer[OUT];
        OUT = (OUT + 1) % BUFFER_SIZE;
        Counter--;
        if (Counter == SIZE - 1) wakeup(Producer);
    }

⚫ Solution: use one semaphore Mutex to synchronize the shared variable Counter; initialization: Mutex = 1.
⚫ Problem: assume Counter = 0.
  The Consumer checks Counter ⇒ it is about to call block().
  The Producer increases Counter by 1 and calls wakeup(Consumer).
  The Consumer is not blocked yet ⇒ the wakeup() is skipped, and the Consumer then blocks waiting for a signal that has already passed.

Solution 2: use two semaphores full and empty.
⚫ Initialization: full ← 0 (number of items in the buffer); empty ← BUFFER_SIZE (number of empty slots in the buffer).

    /* Producer */
    do {
        {Create new product}
        wait(empty);
        {Put the new product into Buffer}
        signal(full);
    } while (1);

    /* Consumer */
    do {
        wait(full);
        {Take one product out of Buffer}
        signal(empty);
        {Consume the product}
    } while (1);
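Putting the pieces together, here is a hedged, self-contained sketch of the bounded-buffer solution with POSIX semaphores and threads; it follows the empty/full scheme above and adds a mutex around the buffer indices (the role the slide gives to the Mutex semaphore). The buffer size, item counts and function names are illustrative.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define BUFFER_SIZE 8

    static int Buffer[BUFFER_SIZE];
    static int IN = 0, OUT = 0;
    static sem_t empty_slots;           /* counts empty slots, starts at BUFFER_SIZE */
    static sem_t full_slots;            /* counts filled slots, starts at 0          */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg)
    {
        for (int item = 0; item < 100; item++) {
            sem_wait(&empty_slots);                  /* wait(empty)  */
            pthread_mutex_lock(&lock);
            Buffer[IN] = item;                       /* put the item into the buffer  */
            IN = (IN + 1) % BUFFER_SIZE;
            pthread_mutex_unlock(&lock);
            sem_post(&full_slots);                   /* signal(full) */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int i = 0; i < 100; i++) {
            sem_wait(&full_slots);                   /* wait(full)   */
            pthread_mutex_lock(&lock);
            int item = Buffer[OUT];                  /* take one item out             */
            OUT = (OUT + 1) % BUFFER_SIZE;
            pthread_mutex_unlock(&lock);
            sem_post(&empty_slots);                  /* signal(empty) */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        sem_init(&empty_slots, 0, BUFFER_SIZE);
        sem_init(&full_slots, 0, 0);
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

Because checking for space and blocking happen atomically inside sem_wait(), the lost-wakeup race described above cannot occur.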
Readers and Writers problem
⚫ Many Reader processes access the database at the same time; several Writer processes update the database.
⚫ Allow an unlimited number of Readers to access the database: while one Reader process is accessing the database, new Reader processes may also access it (Writers have to stay in the waiting queue).
⚫ Allow only one Writer process to update the database at a time.
⚫ Non-preemptive problem: a process stays inside its critical section without being interrupted.

Sleeping Barber problem
⚫ There are N waiting chairs for clients; the barber can cut hair for only one client at a time.
⚫ If there is no client, the barber goes to sleep.
⚫ When a client comes:
  if the barber is sleeping ⇒ wake him up;
  if the barber is working: an empty chair exists ⇒ sit down and wait; no empty chair left ⇒ go away.

Bathroom problem
⚫ A bathroom is to be used by both men and women, but not at the same time.
⚫ If the bathroom is empty, then anyone can enter.
⚫ If the bathroom is occupied, then only a person of the same sex as the occupant(s) may enter.
⚫ The number of people that may be in the bathroom at the same time is limited.
⚫ The implementation must satisfy these constraints: there are two types of processes, male() and female(), and each process enters the bathroom after a random period of time.

Dining Philosophers problem
⚫ A classical synchronization problem that shows the situation where many processes share resources.
⚫ Five philosophers have dinner at a round table; in front of each person is a plate of spaghetti, and between every two plates there is one fork.
⚫ A philosopher does two things: eat and think.
⚫ Each person needs two forks to eat, takes only one fork at a time, takes the left fork and then the right fork, and after finishing eating returns the forks to their original places.

Simple method
⚫ Each fork is a critical resource, synchronized by a semaphore fork[i]: Semaphore fork = {1, 1, 1, 1, 1};
⚫ Algorithm for philosopher Pi: take the forks, {Eat}, return the forks, {Think}.
⚫ If all the philosophers want to eat at the same time, each takes the left fork (wait(fork[i])) and then waits for the right fork (wait(fork[(i+1)%5])) ⇒ deadlock.

Solution 1
⚫ Allow only one philosopher to take the forks at a time: Semaphore mutex ← 1; {Eat} {Think}.
⚫ Drawback: it should be possible to allow two non-adjacent philosophers to eat at a time, but here, when P1 eats and P2 owns the mutex, P3 has to wait.

Solution 2
⚫ The philosophers take the forks in different orders:
  an even-numbered philosopher takes the even-numbered fork first;
  an odd-numbered philosopher takes the odd-numbered fork first.
⚫ {Eat} {Think}
⚫ This solves the deadlock problem (a sketch follows below).
Demonstration — true problem?
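A minimal sketch of the asymmetric-ordering idea with POSIX semaphores, one per fork as in the slide. The slide states the asymmetry in terms of even/odd fork numbers; this sketch uses the common equivalent variant in which even-numbered philosophers take their left fork first and odd-numbered philosophers their right fork first, which breaks the circular wait in the same way. All names are illustrative.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 5
    static sem_t fork_sem[N];               /* fork[i] from the slide, initial value 1 */

    static void *philosopher(void *arg)
    {
        int i = (int)(long)arg;
        int left = i, right = (i + 1) % N;
        /* asymmetric order: even philosophers take the left fork first,
           odd philosophers the right fork first, so no cycle of waiting can form */
        int first  = (i % 2 == 0) ? left : right;
        int second = (i % 2 == 0) ? right : left;

        for (int round = 0; round < 3; round++) {
            sem_wait(&fork_sem[first]);
            sem_wait(&fork_sem[second]);
            printf("philosopher %d eats\n", i);     /* {Eat}   */
            sem_post(&fork_sem[second]);
            sem_post(&fork_sem[first]);
            /* {Think} */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[N];
        for (int i = 0; i < N; i++) sem_init(&fork_sem[i], 0, 1);
        for (int i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)(long)i);
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        return 0;
    }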
4.6 Monitor
Introduction
⚫ A special data type, proposed by Hoare in 1974.
⚫ It combines the declarations of the shared variables with the procedures, local data and initialization code that work on them.
⚫ A process can only access the shared variables via the monitor's procedures.
⚫ Only one process can work inside the monitor at a time; the other processes have to wait.
⚫ A process may be allowed to wait inside the monitor, using condition variables.

Condition variables
⚫ A condition variable is in fact the name of a queue. Declaration: condition x, y;
⚫ It is used only by two operations:
  wait(): called from the monitor's procedures (syntax x.wait() or wait(x)); it blocks the calling process until the process is activated by another process via signal().
  signal(): called from the monitor's procedures (syntax x.signal() or signal(x)); it activates one process waiting in the queue of variable x; if no process is waiting, the operation is skipped.

Monitor usage: sharing a resource
⚫ The monitor contains the user part (the procedures through which the resource is used) and the initialization part; a process's structure reduces to calling the monitor's procedures whenever it needs the resource.

Producer-Consumer problem with a monitor
⚫ Producer: Item = new product; put Item into Buffer (a monitor procedure).
⚫ Consumer: {Take Item from Buffer} (a monitor procedure); {Use Item}.

Chapter outline: ① Process ② Thread ③ CPU scheduling ④ Critical resource and process synchronization ⑤ Deadlock and solutions

5. Deadlock and solutions
5.1 Deadlock concept
⚫ Deadlock concept
⚫ Conditions for resource deadlocks
⚫ Solutions for deadlock
⚫ Deadlock prevention
⚫ Deadlock avoidance
⚫ Deadlock detection and recovery

Deadlock concept
⚫ The system consists of concurrently running processes that share resources.
⚫ Resources have different types (e.g. CPU, memory, ...); each type may have many units (e.g. 2 CPUs, 5 printers).
⚫ Each process is a sequence of continuous operations:
  request a resource: if the resource is not available (being used by other processes) ⇒ the process has to wait;
  use the resource as required (printing, reading input data, ...);
  release the allocated resources.
⚫ When processes share at least 2 resources, the system may be "in danger".

Example
⚫ Two processes P1 and P2 share two resources R1 and R2.
⚫ R1 is synchronized by semaphore S1 (S1 ← 1); R2 is synchronized by semaphore S2 (S2 ← 1).
⚫ Code for P1 and P2:

    P(S1)
    P(S2)
    { Use R1 & R2 }
    V(S1)
    V(S2)

⚫ One possible interleaving:
  Initially S1 = 1, S2 = 1.
  P1 → P(S1): S1 = 0, S2 = 1.
  P2 → P(S1): P2 blocks and enters the queue for R1; S1 = -1, S2 = 1.
  P1 → P(S2): S1 = -1, S2 = 0.
  P1 uses R1 & R2; P1 → V(S1): wakeup(P2); S1 = 0, S2 = 0.
  P2 → P(S2): P2 blocks and enters the queue for R2; S1 = 0, S2 = -1.
  P1 → V(S2): wakeup(P2); S1 = 0, S2 = 0.
  P2 now uses R1 & R2.
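The trace above ends well because both processes request S1 before S2. As a hedged illustration of why sharing two resources puts the system "in danger": if the two processes happened to request the semaphores in opposite orders, the interleaving "P1 takes R1, P2 takes R2" would leave each waiting forever for the other. The sketch below reproduces that situation with POSIX semaphores; the names and the sleep() calls are illustrative, and whether it actually hangs depends on scheduling.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t S1, S2;                     /* guard R1 and R2, both initialized to 1 */

    static void *p1(void *arg)
    {
        sem_wait(&S1);                       /* P(S1): take R1                */
        sleep(1);                            /* widen the risky window        */
        sem_wait(&S2);                       /* P(S2): wait for R2            */
        puts("P1 uses R1 & R2");
        sem_post(&S2);
        sem_post(&S1);                       /* V(S2), V(S1)                  */
        return NULL;
    }

    static void *p2(void *arg)
    {
        sem_wait(&S2);                       /* takes R2 first ...            */
        sleep(1);
        sem_wait(&S1);                       /* ... then waits for R1         */
        puts("P2 uses R1 & R2");
        sem_post(&S1);
        sem_post(&S2);
        return NULL;
    }

    int main(void)
    {
        sem_init(&S1, 0, 1);
        sem_init(&S2, 0, 1);
        pthread_t t1, t2;
        pthread_create(&t1, NULL, p1, NULL);
        pthread_create(&t2, NULL, p2, NULL);
        pthread_join(t1, NULL);              /* with the sleeps, this usually never returns */
        pthread_join(t2, NULL);
        return 0;
    }

Requesting both semaphores in the same fixed order, as in the slide's code, removes the circular wait and with it this deadlock.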
