CSE_S_IV_Unit_01_CSE_S_IV_Unit_01_Slow_Learners_Notes.pdf

Full Transcript

Unit 1: Introduction To OS

1.1) Operating System Definition:
An Operating System (OS) is an interface between the computer user and the computer hardware. An operating system is software which performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers. An operating system is a program that acts as an interface between the user and the computer hardware and controls the execution of all kinds of programs.

Following are some of the important functions of an operating system:
1) Memory Management
2) Processor Management
3) Device Management
4) File Management
5) Security
6) Control over system performance
7) Job accounting
8) Error detecting aids
9) Coordination between other software and users

Other Important Activities: Following are some of the important activities that an Operating System performs −
Security − By means of passwords and similar techniques, it prevents unauthorized access to programs and data.
Control over system performance − Recording delays between a request for a service and the response from the system.
Job accounting − Keeping track of the time and resources used by various jobs and users.
Error detecting aids − Production of dumps, traces, error messages, and other debugging and error detecting aids.
Coordination between other software and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer system.

1.2) Operating System Generations (Evolution):
Operating systems have been evolving over the years. This evolution can be categorized into different generations, briefed below:

0th Generation: The term 0th generation refers to the period of development of computing when Charles Babbage invented the Analytical Engine and, later, John Atanasoff created a computer in 1940. The hardware component technology of this period was electronic vacuum tubes.
There was no operating system available for the computers of this generation, and computer programs were written in machine language.

First Generation (1951-1956): The first generation marked the beginning of commercial computing, including the introduction of Eckert and Mauchly's UNIVAC I in early 1951 and, a bit later, the IBM 701. System operation was performed with the help of expert operators and without the benefit of an operating system for a time, though programs began to be written in higher-level, procedure-oriented languages, and thus the operator's routine expanded. The programming language FORTRAN was developed by John W. Backus in 1956.

Second Generation (1956-1964): The second generation of computer hardware was most notably characterised by transistors replacing vacuum tubes as the hardware component technology. The first operating system, GMOS, was developed by General Motors for IBM computers. GMOS was based on a single-stream batch processing system.

Third Generation (1964-1979): The third generation officially began in April 1964 with IBM's announcement of its System/360 family of computers. Hardware technology began to use integrated circuits (ICs), which yielded significant advantages in both speed and economy.

Fourth Generation (1979 – Present): The fourth generation is characterised by the appearance of the personal computer and the workstation. The component technology of the third generation was replaced by very large scale integration (VLSI). Many operating systems which we are using today, like Windows, Linux, MacOS etc., were developed in the fourth generation.

1.3) Components of Operating System:
There are various components of an Operating System to perform well-defined tasks. Though most operating systems differ in structure, logically they have similar components. Each component must be a well-defined portion of the system that appropriately describes its functions, inputs, and outputs.
There are the following 8 components of an Operating System:
1. Process Management
2. I/O Device Management
3. File Management
4. Network Management
5. Main Memory Management
6. Secondary Storage Management
7. Security Management
8. Command Interpreter System

1] Process Management: A process is a program, or a fraction of a program, that is loaded in main memory. A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task. The process management component manages the multiple processes running simultaneously on the Operating System. "A program in running state is called a process."

The operating system is responsible for the following activities in connection with process management:
Create, load, execute, suspend, resume, and terminate processes.
Switch the system among multiple processes in main memory.
Provide communication mechanisms so that processes can communicate with each other.
Provide synchronization mechanisms to control concurrent access to shared data, to keep shared data consistent.
Allocate/de-allocate resources properly to prevent or avoid deadlock situations.

2] I/O Device Management: One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user. I/O Device Management provides an abstract level of hardware devices and keeps the details hidden from applications, to ensure proper use of devices, to prevent errors, and to provide users with a convenient and efficient programming environment. Following are the tasks of the I/O Device Management component:
Hide the details of hardware devices.
Manage main memory for the devices using cache, buffer, and spooling.
Maintain and provide custom drivers for each device.

3] File Management: "A file is a sequence of bits, bytes, lines or records whose meaning is defined by its creator and user." File management is one of the most visible services of an operating system.
Computers can store information in several different physical forms; magnetic tape, disk, and drum are the most common forms. A file is defined as a set of correlated information, and it is defined by the creator of the file. Mostly, files represent data, source and object forms, and programs. Data files can be of any type, like alphabetic, numeric, and alphanumeric.

The operating system is responsible for the following activities in connection with file management:
File creation and deletion
Directory creation and deletion
The support of primitives for manipulating files and directories
Mapping files onto secondary storage
File backup on stable (nonvolatile) storage media

4] Network Management: "Network management is the process of keeping your network healthy for efficient communication between different computers." Network management is the process of managing and administering a computer network. A computer network is a collection of various types of computers connected with each other. Network management comprises fault analysis, maintaining the quality of service, provisioning of networks, and performance management. Following are the features of network management:
Network administration
Network maintenance
Network operation
Network provisioning
Network security

5] Main Memory Management: "The main motivation behind Memory Management is to maximize memory utilization on the computer system." Memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible data shared by the CPU and I/O devices. Main memory is a volatile storage device, which means it loses its contents in the case of system failure or as soon as system power goes down.

6] Secondary Storage Management: The main purpose of a computer system is to execute programs. These programs, together with the data they access, must be in main memory during execution.
Since main memory is too small to permanently accommodate all data and programs, the computer system must provide secondary storage to back up main memory. Most programs, like compilers, assemblers, sort routines, editors, formatters, and so on, are stored on the disk until loaded into memory, and then use the disk as both the source and destination of their processing. The operating system is responsible for the following activities in connection with disk management:
Free space management
Storage allocation
Disk scheduling

7] Security Management: "Security Management refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system, specifying the controls to be imposed, together with some means of enforcement." The operating system is primarily responsible for all tasks and activities that happen in the computer system. Various mechanisms can be used to ensure that the files, memory segments, CPU and other resources can be operated on only by those processes that have gained proper authorization from the operating system.

8] Command Interpreter System: "The Command Interpreter System allows human users to interact with the Operating System and provides a convenient programming environment to the users." One of the most important components of an operating system is its command interpreter. The command interpreter is the primary interface between the user and the rest of the system.

1.4) Operating System – Services:
An Operating System provides services to both the users and to the programs. It provides programs an environment to execute. It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system −
Program execution
I/O operations
File system manipulation
Communication
Error detection
Resource allocation
Protection

1] Program execution: Operating systems handle many kinds of activities, from user programs to system programs like the printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a process. A process includes the complete execution context (code to execute, data to manipulate, registers, OS resources in use). Following are the major activities of an operating system with respect to program management −
Loads a program into memory.
Executes the program.
Handles the program's execution.
Provides a mechanism for process synchronization.
Provides a mechanism for process communication.
Provides a mechanism for deadlock handling.

2] I/O Operation: An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from the users. The Operating System manages the communication between the user and device drivers. An I/O operation means a read or write operation with any file or any specific I/O device. The operating system provides access to the required I/O device when required.

3] File system manipulation: A file represents a collection of related information. Computers can store files on the disk (secondary storage) for long-term storage purposes. Examples of storage media include magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has its own properties, like speed, capacity, data transfer rate and data access methods. Following are the major activities of an operating system with respect to file management −
A program needs to read a file or write a file.
The operating system gives the program permission for an operation on a file. Permission varies from read-only, read-write, denied and so on.
The Operating System provides an interface to the user to create/delete files.
The Operating System provides an interface to the user to create/delete directories.
The Operating System provides an interface to create a backup of the file system.

4] Communication: Multiple processes communicate with one another through communication lines in the network. The OS handles routing and connection strategies, and the problems of contention and security. Following are the major activities of an operating system with respect to communication −
Two processes often require data to be transferred between them.
Both processes can be on one computer or on different computers, connected through a computer network.
Communication may be implemented by two methods, either by Shared Memory or by Message Passing.

5] Error handling: Errors can occur anytime and anywhere. An error may occur in the CPU, in I/O devices or in the memory hardware. Following are the major activities of an operating system with respect to error handling −
The OS constantly checks for possible errors.
The OS takes an appropriate action to ensure correct and consistent computing.

6] Resource Management: In the case of a multi-user or multi-tasking environment, resources such as main memory, CPU cycles and file storage are to be allocated to each user or job. Following are the major activities of an operating system with respect to resource management −
The OS manages all kinds of resources using schedulers.
CPU scheduling algorithms are used for better utilization of the CPU.

7] Protection: Considering a computer system having multiple users and concurrent execution of multiple processes, the various processes must be protected from each other's activities. Protection refers to a mechanism or a way to control the access of programs, processes, or users to the resources defined by a computer system.
Following are the major activities of an operating system with respect to protection −
The OS ensures that all access to system resources is controlled.
The OS ensures that external I/O devices are protected from invalid access attempts.
The OS provides authentication features for each user by means of passwords.

1.5) Operating System - Processes
1.5.1] Process: A process is basically a program in execution. The execution of a process must progress in a sequential fashion. "A process is defined as an entity which represents the basic unit of work to be implemented in the system." To put it in simple terms, we write our computer programs in a text file, and when we execute this program, it becomes a process which performs all the tasks mentioned in the program. When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text and data.

Component & Description
Stack: The process stack contains temporary data such as method/function parameters, return addresses and local variables.
Heap: This is memory dynamically allocated to a process during its run time.
Text: This contains the compiled program code; the current activity is represented by the value of the Program Counter and the contents of the processor's registers.
Data: This section contains the global and static variables.

1.5.2] Process Life Cycle: When a process executes, it passes through different states. These stages may differ in different operating systems, and the names of these states are also not standardized. In general, a process can have one of the following five states at a time.
State & Description
1] Start: This is the initial state when a process is first started/created.
2] Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running, if it is interrupted by the scheduler so that the CPU can be assigned to some other process.
3] Running: Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
4] Waiting: A process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available.
5] Terminated or Exit: Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.

Process Control Block (PCB): A Process Control Block is a data structure maintained by the Operating System for every process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a process, as listed below −

Information & Description
Process State: The current state of the process, i.e., whether it is ready, running, waiting, etc.
Process privileges: This is required to allow/disallow access to system resources.
Process ID: Unique identification for each process in the operating system.
Pointer: A pointer to the parent process.
Program Counter: The Program Counter is a pointer to the address of the next instruction to be executed for this process.
CPU registers: The various CPU registers whose contents need to be saved for the process when it leaves the running state.
CPU Scheduling Information: Process priority and other scheduling information which is required to schedule the process.
Memory management information: This includes page table, memory limits and segment table information, depending on the memory system used by the operating system.
Accounting information: This includes the amount of CPU time used for process execution, time limits, execution ID, etc.
I/O status information: This includes the list of I/O devices allocated to the process.

1.6) Operating System - Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

1.6.1] Categories of Scheduling: There are two categories of scheduling:
1. Non-preemptive: Here the resource can't be taken from a process until the process completes execution. The switching of resources occurs only when the running process terminates or moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of time. During execution, a process may switch from the running state to the ready state, or from the waiting state to the ready state. This switching occurs because the CPU may give priority to other processes and replace the currently running process with a higher-priority one.

1.6.2] Process Scheduling Queues: The OS maintains all Process Control Blocks (PCBs) in process scheduling queues. The OS maintains a separate queue for each of the process states, and the PCBs of all processes in the same execution state are placed in the same queue. When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new state queue.
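The PCB-and-queue mechanics described above can be sketched in Python. This is only an illustrative toy model, not a real OS implementation: the PCB fields and queue names below are simplified, hypothetical versions of the ones listed in the notes.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A toy Process Control Block holding a few of the fields listed above."""
    pid: int
    state: str = "ready"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

# One scheduling queue per process state, as described in section 1.6.2.
queues = {"ready": deque(), "running": deque(), "waiting": deque()}

def change_state(pcb: PCB, new_state: str) -> None:
    """Unlink the PCB from its current state queue and move it to the new one."""
    queues[pcb.state].remove(pcb)
    pcb.state = new_state
    queues[new_state].append(pcb)

p = PCB(pid=1)
queues["ready"].append(p)    # a new process waits in the ready queue
change_state(p, "running")   # the scheduler dispatches it to the CPU
change_state(p, "waiting")   # it blocks, e.g. on an I/O request
```

The key point is that a state change never copies the process; only the PCB is unlinked from one queue and linked into another.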
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps the set of all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O device constitute these queues.

1.6.3] Two-State Process Model: The two-state process model refers to the running and not-running states, which are described below −
State & Description
1] Running: When a new process is created, it enters the system in the running state.
2] Not Running: Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process. The queue is implemented using a linked list. The dispatcher works as follows: when a process is interrupted, that process is transferred to the waiting queue; if the process has completed or aborted, the process is discarded. In either case, the dispatcher then selects a process from the queue to execute.

Schedulers: Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler

1] Long-Term Scheduler: It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution; the process is loaded into memory for CPU scheduling. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming.
If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.

2] Short-Term Scheduler: It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It performs the change of a process from the ready state to the running state. The CPU scheduler selects a process among the processes that are ready to execute and allocates the CPU to it. Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.

3] Medium-Term Scheduler: Medium-term scheduling is a part of swapping. It removes processes from memory and so reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes. A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Comparison among Schedulers:
1) Role − Long-term: it is a job scheduler. Short-term: it is a CPU scheduler. Medium-term: it is a process-swapping scheduler.
2) Speed − Long-term: slower than the short-term scheduler. Short-term: fastest among the three. Medium-term: in between the short-term and long-term schedulers.
3) Degree of multiprogramming − Long-term: controls it. Short-term: provides less control over it. Medium-term: reduces it.
4) Time-sharing systems − Long-term: almost absent or minimal. Short-term: also minimal. Medium-term: a part of time-sharing systems.
5) Selection − Long-term: selects processes from the pool and loads them into memory for execution. Short-term: selects those processes which are ready to execute. Medium-term: can re-introduce a process into memory so that its execution can be continued.

1.7) Operations on Processes
There are many operations that can be performed on processes. Some of these are process creation, process preemption, process blocking, and process termination. These are given in detail as follows −

1.7.1] Process Creation: Processes need to be created in the system for different operations. This can be done by the following events −
User request for process creation
System initialization
Execution of a process-creation system call by a running process
Batch job initialization
A process may be created by another process using fork(). The creating process is called the parent process and the created process is the child process. A child process can have only one parent, but a parent process may have many children. Both the parent and child processes have the same memory image, open files, and environment strings; however, they have distinct address spaces.

1.7.2] Process Preemption: An interrupt mechanism is used in preemption; it suspends the currently executing process, and the next process to execute is determined by the short-term scheduler. Preemption makes sure that all processes get some CPU time for execution.

1.7.3] Process Blocking: A process is blocked if it is waiting for some event to occur. This event may be I/O, since I/O operations are carried out by devices and do not require the processor. After the event is complete, the process again goes to the ready state.

1.7.4] Process Termination: After a process has completed the execution of its last instruction, it is terminated. The resources held by a process are released after it is terminated. A child process can be terminated by its parent process if its task is no longer relevant.
The child process sends its status information to the parent process before it terminates. Also, when a parent process is terminated, its child processes are terminated as well, since the child processes cannot run if the parent process is terminated.

1.8) Cooperating Processes
Cooperating processes are those that can affect, or are affected by, other processes running on the system. Cooperating processes may share data with each other.

Reasons for needing cooperating processes: There may be many reasons for the requirement of cooperating processes. Some of these are given as follows −
Modularity: Modularity involves dividing complicated tasks into smaller subtasks. These subtasks can be completed by different cooperating processes. This leads to faster and more efficient completion of the required tasks.
Information Sharing: Sharing of information between multiple processes can be accomplished using cooperating processes. This may include access to the same files. A mechanism is required so that the processes can access the files in parallel with each other.
Convenience: There are many tasks that a user needs to do, such as compiling, printing, editing etc. It is convenient if these tasks can be managed by cooperating processes.
Computation Speedup: Subtasks of a single task can be performed in parallel using cooperating processes. This increases the computation speedup as the task can be executed faster. However, this is only possible if the system has multiple processing elements.

Methods of Cooperation: Cooperating processes can coordinate with each other using shared data or messages. Details about these are given as follows −
Cooperation by Sharing: The cooperating processes can cooperate with each other using shared data such as memory, variables, files, databases etc. A critical section is used to provide data integrity, and writing is made mutually exclusive to prevent inconsistent data.
Cooperation by Communication: The cooperating processes can cooperate with each other using messages. This may lead to deadlock if each process is waiting for a message from the other before performing an operation. Starvation is also possible if a process never receives a message.

1.9) Interprocess Communication
Interprocess communication is the mechanism provided by the operating system that allows processes to communicate with each other. This communication could involve a process letting another process know that some event has occurred, or the transferring of data from one process to another.

1.9.1] Synchronization in Interprocess Communication: Synchronization is a necessary part of interprocess communication. It is either provided by the interprocess control mechanism or handled by the communicating processes. Some of the methods to provide synchronization are as follows −
Semaphore: A semaphore is a variable that controls access to a common resource by multiple processes. The two types of semaphores are binary semaphores and counting semaphores.
Mutual Exclusion: Mutual exclusion requires that only one process or thread can enter the critical section at a time. This is useful for synchronization and also prevents race conditions.
Barrier: A barrier does not allow individual processes to proceed until all the processes reach it. Many parallel languages and collective routines impose barriers.
Spinlock: This is a type of lock. The processes trying to acquire this lock wait in a loop while checking whether the lock is available or not. This is known as busy waiting, because the process is not doing any useful operation even though it is active.

1.9.2] Approaches to Interprocess Communication: The different approaches to implementing interprocess communication are given as follows −
Pipe: A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-way data channel between two processes. This uses standard input and output methods.
Pipes are used in all POSIX systems as well as in Windows operating systems.
Socket − The socket is the endpoint for sending or receiving data in a network. This is true for data sent between processes on the same computer or data sent between different computers on the same network. Most operating systems use sockets for interprocess communication.
File − A file is a data record that may be stored on a disk or acquired on demand by a file server. Multiple processes can access a file as required. All operating systems use files for data storage.
Signal − Signals are useful in interprocess communication in a limited way. They are system messages that are sent from one process to another. Normally, signals are not used to transfer data but are used for remote commands between processes.
Shared Memory − Shared memory is memory that can be simultaneously accessed by multiple processes. This is done so that the processes can communicate with each other. All POSIX systems, as well as Windows operating systems, use shared memory.
Message Queue − Multiple processes can read and write data to the message queue without being directly connected to each other. Messages are stored in the queue until their recipient retrieves them. Message queues are quite useful for interprocess communication and are used by most operating systems.

1.10) Thread Overview:
A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains the execution history. A thread shares some information with its peer threads, such as the code segment, data segment and open files. When one thread alters a code segment memory item, all other threads see that change. A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism.
Threads represent a software approach to improving the performance of an operating system by reducing overhead; in many respects a thread is equivalent to a classical process.

Advantages of Thread:
Threads minimize the context switching time.
Use of threads provides concurrency within a process.
Communication between threads is efficient.
It is more economical to create and context switch threads.
Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.

Types of Thread:
Threads are implemented in the following two ways −
User Level Threads − User managed threads.
Kernel Level Threads − Operating System managed threads acting on the kernel, the operating system core.

1.11) Multithreading Models:
Some operating systems provide a combined user level thread and kernel level thread facility. Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. Multithreading models are of three types:
Many to many relationship.
Many to one relationship.
One to one relationship.

1] Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads, for example 6 user level threads multiplexed onto 6 kernel level threads. In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine. This model provides the best level of concurrency, and when a thread performs a blocking system call, the kernel can schedule another thread for execution.

2] Many to One Model
The many-to-one model maps many user level threads to one kernel level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process is blocked.
Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors. If the user level thread libraries are implemented in the operating system in such a way that the system does not support them, then the kernel threads use the many-to-one relationship mode.

3] One to One Model
There is a one-to-one relationship of user level threads to kernel level threads. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors. The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.

Difference between User-Level & Kernel-Level Threads:
1] User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.
2] User-level threads are implemented by a thread library at the user level; kernel threads are created with operating system support.
3] A user-level thread is generic and can run on any operating system; a kernel-level thread is specific to the operating system.
4] With user-level threads alone, multi-threaded applications cannot take advantage of multiprocessing; with kernel-level threads, kernel routines themselves can be multithreaded.

1.12) Threading Issues:
We can discuss some of the issues to consider in designing multithreaded programs. These issues are as follows −

The fork() and exec() system calls
The fork() system call is used to create a duplicate process. The meaning of the fork() and exec() system calls changes in a multithreaded program. If one thread in a program calls fork(), does the new process duplicate all threads, or is the new process single-threaded? Some UNIX systems have chosen to provide two versions of fork(): one that duplicates all threads and another that duplicates only the thread that invoked the fork() system call.
If a thread calls the exec() system call, the program specified in the parameter to exec() will replace the entire process, including all of its threads.

Signal Handling:
Generally, a signal is used in UNIX systems to notify a process that a particular event has occurred. A signal is received either synchronously or asynchronously, based on the source of and the reason for the event being signalled. All signals, whether synchronous or asynchronous, follow the same pattern as given below −
A signal is generated by the occurrence of a particular event.
The signal is delivered to a process.
Once delivered, the signal must be handled.

Cancellation:
Thread cancellation is the task of terminating a thread before it has completed. For example, if multiple database threads are concurrently searching through a database and one thread returns the result, the remaining threads might be cancelled. A target thread is a thread that is to be cancelled. Cancellation of a target thread may occur in two different scenarios −
Asynchronous cancellation − One thread immediately terminates the target thread.
Deferred cancellation − The target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.

Thread Pools:
In a multithreaded web server, whenever the server receives a request it creates a separate thread to service the request. Some of the problems that arise in creating a new thread per request are as follows −
The amount of time required to create the thread prior to serving the request, together with the fact that this thread will be discarded once it has completed its work.
If all concurrent requests are allowed to be serviced in a new thread, there is no bound on the number of threads concurrently active in the system; unlimited threads could exhaust system resources like CPU time or memory.
The idea of a thread pool is to create a number of threads at process start-up and place them into a pool, where they sit and wait for work.
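The thread-pool idea above can be sketched with Java's ExecutorService: a fixed number of worker threads is created up front, and incoming tasks (standing in for server requests) are handed to the pool instead of each getting a brand-new thread. The task and variable names are invented for the example.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// A fixed pool of 3 worker threads servicing 10 "requests".
public class ThreadPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(3); // bounded thread count
        AtomicInteger served = new AtomicInteger();

        for (int i = 0; i < 10; i++) {
            // Each submitted task plays the role of one incoming request;
            // it runs on whichever pooled thread is free.
            pool.submit(served::incrementAndGet);
        }

        pool.shutdown();                         // accept no new tasks
        pool.awaitTermination(5, TimeUnit.SECONDS); // wait for queued tasks to finish
        System.out.println(served.get());        // all 10 requests were served
    }
}
```

Note how the pool bounds the number of active threads (3 here) regardless of how many requests arrive, avoiding both the per-request creation cost and the unbounded-thread problem described above.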
1.13) Threads in Java:
In Java, a thread is the path of execution followed while a program is being run. Every program has at least one main thread, which is provided by the JVM (Java Virtual Machine) at the beginning of the program's execution; the main() method is then called by this main thread. A program running on the Java Virtual Machine can execute several threads at once. Every thread has a priority, and priority order helps determine which threads are run first. Threads are essential to a program because they allow several activities to be carried out within a single program. The program counter, stack, and local variables are unique to each thread.
Java offers two different techniques to construct a thread −
Extending the java.lang.Thread class.
Implementing the Runnable interface.

Extending java.lang.Thread
Here a new class extends the Thread class, and a thread is created by generating an instance of that class. The functionality that should be carried out by the thread is placed in the run() method. A thread is made runnable by calling start() once it has been created; the new thread of execution then invokes the run() method.

Implementing the Runnable Interface
This is the simplest way to create a thread. A class is written to implement the Runnable interface and its run() method; the thread's code is always written inside run(). The run() method is invoked via the start() method: when start() is called, a new stack is provided to the thread and run() is then called to introduce the thread into the program.
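The two creation techniques above can be shown side by side in one small program; the class names are illustrative.

```java
// Technique 1: extend java.lang.Thread and override run().
class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("from extended Thread");
    }
}

// Technique 2: implement Runnable and pass it to a Thread object.
class MyTask implements Runnable {
    @Override
    public void run() {
        System.out.println("from Runnable");
    }
}

public class ThreadCreationDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new MyThread();
        Thread t2 = new Thread(new MyTask()); // Runnable wrapped in a Thread
        t1.start();   // start() gives each thread its own stack
        t2.start();   //   and then invokes its run() method
        t1.join();    // wait for both threads to finish
        t2.join();
    }
}
```

Note that start(), not run(), is called: calling run() directly would execute the code on the current thread rather than creating a new one. The two output lines may appear in either order, since the threads run concurrently.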
