AM2020 - Module 4 - Operating Systems
2020
Ms. Claire Sciberras
Summary
This document is Module 4 of an operating systems course. It describes different types of operating systems, including single-user, multiprogramming/multitasking and batch processing systems, and explains the basics of operating systems and their functions.
Full Transcript
The following notes have been compiled using various online sources as well as a number of books. Students are encouraged to keep researching and summarising notes to help in their studies.

Module 4: Operating Systems
Ms. Claire Sciberras, AM2020

1. Introducing Operating Systems

In the very first stages of the development of computers, control of these devices was extremely tedious and hard to manage. Each machine code program had to be keyed in manually, and initially there were not even any computer screens! The output was usually presented in the form of lights, or as simple data typed out on a teletype, an old-style mechanical typewriter machine. As electronic hardware improved, computer speed increased; however, the intervention of human operators still slowed down processing time considerably. An automatic system was therefore required which would eliminate this human dependency, and this brought about the advent of the operating system.

Software systems can be divided into two main categories:
1. Systems software
2. Applications software

Systems software consists of low-level programs that interact with the computer at a very basic level. This includes:

This type of software manages functions which are required to coordinate the different segments of the computer's components. It manages resources, allocates them, provides the interfaces with the users, etc.
{ref: http://www.webopedia.com/TERM/A/application.html}

In contrast, applications software includes database programs, word processors and spreadsheets. Figuratively speaking, applications software sits on top of systems software because it is unable to run without the operating system. This type of software is also known as end-user software.

1.1 A Perspective for the Operating System

The Operating System (OS) is concerned with interfacing directly with the computer architecture. You can think of the OS as being wrapped around the hardware of the computer system. Regular end-users usually do not interact with the OS directly, but rather through application packages such as spreadsheet programs, database programs and so on. Internally, the application package being used translates the user's I/O into commands which are understood by the OS. The OS then translates these commands (through the machine code routines contained within it) into a form which the hardware connected to the computer understands. The application packages can thus be seen as another layer placed between the end user and the OS.

The onion-ring diagram, shown below, is generally used to provide a diagrammatic representation of the divisions of a computer's OS. It is so called because it resembles peeling off the layers of an onion to get to the core.

Kernel (innermost ring)

Utilities

The Operating System can be said to contain all those routines, in the form of detailed step-by-step instructions, which are regularly used to manage the entire memory, the peripherals, etc. of the computer.

1.2 Getting Started

The process of getting the computer started up is known as booting the system.
Whenever a computer is switched on, the bootstrap is used to perform a sequence of events, such as checking that all the hardware is connected and searching for start-up instructions. The bootstrap is resident within ROM, and it is needed to make sure that the other parts of the Operating System are loaded into memory and started up properly.

A bootstrap loader is a loader, i.e. a type of program with a very basic function: once it is given power, it runs on its own. Its scope is to run and call a sequence of other programs; it calls a sequence of actions to be taken one after another, and then the OS takes over. The bootstrap loader is in ROM and it, for example, calls the BIOS, which can then call the Kernel (the lowest-level layer of the OS, containing the most important routines, because if something happens to the foundations of the OS you will have trouble; it is the layer closest to the hardware).

What is the BIOS?

1.3 Main Objectives of an Operating System

Some of the main objectives of any Operating System are the following:
1. It acts as an interface between the hardware and the program requesting I/O.
2. Handling of interrupts generated by I/O devices.
3. Enabling the sharing of I/O devices amongst the many programs using the CPU.
4. Assigning priorities to the tasks to be executed, as well as priorities for the use of devices.
5. Auditing – keeping logs of events while the computer is switched on (the system log).
6. Management of main memory to ensure efficiency.

2. Different Types of Operating Systems

The different types of Operating Systems available are distinguished by the variations in the ways that they operate. The main types of Operating Systems are the following:
1. Single-User Systems
2. Multiprogramming/Multitasking Systems
3. Batch Processing Systems
4. Time-Sharing/Multi-Access Systems
5. Real-Time Systems
6. Multiprocessing/Parallel Systems
7. Distributed/Network Systems
8. Online Systems

2.1 Single-User Systems

This is the most basic form of Operating System available. It provides a virtual machine for only one user at a time. It does not allow for concurrency: each process needs to finish before the next can begin, even if there is idle time.

2.2 Multiprogramming/Multitasking Systems

If more than one job (i.e. user program) is resident within the machine at the same time, then the CPU can switch between jobs while it is idle, for example when one of the jobs is waiting for a peripheral device. This is known as a multiprogramming operating system. The worst situation for any CPU is idle time, which occurs when the CPU is simply waiting because, for example, a device needed by the job being executed is currently in use by another job.

In order to prevent copies of the same application programs, compilers, etc. being loaded into the machine when requested by several different users, the same program or application can be shared amongst these users. When this is possible, it is known as a multitasking operating system. Multitasking and multiprogramming are often used interchangeably.

Multiprogramming functions basically as follows: the CPU executes the first program. Is an I/O operation required? If yes, the CPU transfers control to the execution of a second program from main memory. If no, it proceeds with the processing of the current program.
If the current (second) program requires I/O, control returns to the first program (or proceeds to the next program within main memory if more are awaiting execution).

Graph representation:

By using a multiprogramming operating system, the CPU functions much more efficiently, since CPU idle time is reduced by switching between program executions. Also, programs get executed faster overall (using fewer units of time) when compared to each job being executed individually. Since a multiprogramming operating system runs programs concurrently, other functions are required within this OS, such as scheduling, memory management and system security.

2.3 Batch Processing Systems

In a Batch Operating System, batches of programs (referred to as jobs) are submitted together and processed one at a time. The output of one job can only be obtained when the processing of the full batch of jobs has been completed, and the user cannot interfere during the execution of the batch (it is therefore also known as a non-interactive system). This type of processing is generally used when a great number of similar executions is required and the delay between the input of the jobs and the actual production of the results does not matter (for example, a payroll system). A batch operating system must also make use of multitasking properties in order to decide which programs to load into memory and when. In addition, scheduling is required in order to identify when to run the programs.

JCL – Job Control Language

This is a special language used within Batch Operating Systems. It is used to identify when a job starts and when it ends. The program created through the use of JCL contains information pertaining to each job, such as the job name, the compilers required, etc. For every job that you submit, you need to tell the OS where to find the appropriate input, how to process that input, and what to do with the resulting output. You use job control language (JCL) to convey this information to the OS through a set of statements known as job control statements. While application programmers need some knowledge of JCL, the production control analyst must be highly proficient with JCL to create, monitor, correct and rerun the company's daily batch workload.

2.4 Time-Sharing/Multi-Access Systems

In contrast to what is offered by a Batch system, where vast amounts of work get processed but no feedback is given instantly, Time-Sharing systems allow more than one user to gain access to the operating system at the same time. Each user is allocated a time slice, which is the time given by the CPU to each user on the system. The CPU then switches from one job to another, and to each end user it appears as though the CPU is solely managing his/her processes.

Diagrammatic representation:

The more users are added to the system, the slower the system becomes (given that the same processing speed is retained), as each user will get a reduced time slice.

2.5 Real-Time Operating Systems (RTOS)

A Real-Time OS is one which is intended to serve real-time application requests. In general, an operating system (OS) is responsible for managing the hardware resources of a computer and hosting the applications that run on it. An RTOS performs these tasks, but is also specially designed to run applications with very precise timing and a high degree of reliability.
This can be especially important in measurement and automation systems where downtime is costly or a program delay could cause a safety hazard. To be considered real-time, an operating system must have a known maximum time for each of the operations that it performs. There are two main types of Real-Time Systems. These are:
1. Hard Real-Time
2. Soft Real-Time

Operating systems that can absolutely guarantee a maximum time for these operations are referred to as Hard Real-Time, while operating systems that can only guarantee a maximum most of the time are referred to as Soft Real-Time. To fully grasp these concepts, it is helpful to consider an example. Imagine that you are designing an airbag system for a new model of car. In this case, a small error in timing (causing the airbag to deploy too early or too late) could be catastrophic and cause injury. Therefore, a hard real-time system is needed. On the other hand, if you were to design a mobile phone that received streaming video, it may be acceptable to lose a small amount of data occasionally, even though on average it is important to keep up with the video stream. For this application, a soft real-time operating system may suffice.

2.6 Multiprocessing/Parallel Systems

In a Multiprocessing or Parallel system, rather than having a single CPU available for the execution of jobs, more than one CPU is available, so that several programs can run concurrently. Multiprocessing systems are much more complicated than single-processor systems because the operating system must allocate resources to competing processes in a reasonable manner.

In a multiprocessing system, all CPUs may be equal, or some may be reserved for special purposes. A combination of hardware and operating-system software design considerations determines the symmetry (or lack thereof) in a given system. For example, hardware or software considerations may require that only one CPU respond to all hardware interrupts, whereas all other work in the system may be distributed equally among the CPUs. Such systems are also referred to as tightly coupled. Here, multiprocessor systems contain multiple CPUs that are connected at the bus level. These CPUs may have access to a central shared memory, or may participate in a memory hierarchy with both local and shared memory. The CPUs may also share the computer bus, the clock and even peripherals.

2.7 Distributed/Network Systems

Distributed operating systems are a type of Network Operating System (NOS). NOSs exist to facilitate networking between two or more computers, and to operate and improve networks (e.g. routing). Many Operating Systems have some networking ability, but not all such Operating Systems are Network Operating Systems. Distributed Operating Systems go beyond most Network Operating Systems: their main scope is to divide, distribute and migrate tasks and information in order to operate over networks, i.e. to run on two or more processors (a.k.a. clustering and heterogeneous multiprocessing).

Unlike multiprocessing/parallel systems, Distributed Systems do not share memory or a clock. Rather, each processor connected over the network has its own local memory, and these processors are usually found at locations distant from one another, where through their network connection they can communicate with one another over a high-speed transmission medium.
These operating systems are also referred to as loosely coupled systems. Such Distributed Operating Systems offer the following advantages:
1.
2.
3.

Process Control Operations

A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor. The PCB contains important information about the specific process, including:

The PCB is a store that allows the operating system to locate key information about a process. Thus, the PCB is the data structure that defines a process to the operating system.

3. The Main Functions of an Operating System

An Operating System can be said to have the following main functions:
1. Process Control
2. Memory Management
3. Handling of I/O Operations
4. Interrupt Handling

3.1 Process Control

Process management is an integral part of any modern-day operating system (OS). The OS must:
1.
2.
3.
4.

To meet these requirements, the OS must maintain a data structure for each process which describes the state and resource ownership of that process, and which enables the OS to exert control over each process. It is important to note that a process is not a program: a process is only ONE instance of a program in execution, and many processes can be running the same program.

The five major activities of an operating system in regard to process management are:
1. Creation and deletion of user and system processes.
2. Suspension and resumption of processes.
3. A mechanism for process synchronization.
4. A mechanism for process communication.
5. A mechanism for deadlock handling.

As a process executes over time, it can be in any of the following states:

State/Activity – Description
New –
Running –
Waiting/Blocked –
Ready –
Terminated –

Below is a state diagram representation of the process activity states. Label each of the representative arrows correctly with the following: Interrupt; Admitted; Exit; I/O or event wait; Scheduler dispatch; I/O or event completion.

Below are some possible transitions between the states, and what causes a process (or thread) to undertake such a transition:

READY to RUNNING – Handled by the Process Scheduler using some predefined algorithm, such as FCFS, SJN, priority scheduling, SRT or round robin, to determine which process will get the CPU, when, and for how long.

RUNNING back to READY – Handled by the Process Scheduler according to some predefined time limit or other criterion, for example a priority interrupt or an expired quantum.

RUNNING to WAITING – Handled by the Process Scheduler and initiated by an instruction in the process, such as a command to READ, WRITE or another I/O request, or when a page fetch is required.

WAITING to READY – Handled by the Process Scheduler and initiated by a signal from the I/O device manager that the I/O request has been satisfied and the job can continue.

The THREE main process states:

The three-state process model consists of READY, RUNNING and SUSPENDED (i.e. WAITING/BLOCKED). Processes, also known as tasks, entering the system must initially go into the READY state before they can enter the RUNNING state. Processes normally leave the system from the RUNNING state. In each of the three states, the process occupies space in main memory. A minimal sketch of the transitions listed above is given below.
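To make these transitions concrete, here is a minimal, illustrative Python sketch of a process state machine. The state names and the transition table mirror the arrow labels above; everything else (the Process class, the event names) is invented for illustration.

from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"      # also called blocked/suspended
    TERMINATED = "terminated"

# Allowed transitions, mirroring the labelled arrows in the notes:
# admitted, scheduler dispatch, interrupt/quantum expired,
# I/O or event wait, I/O or event completion, exit.
TRANSITIONS = {
    (State.NEW, "admitted"): State.READY,
    (State.READY, "dispatch"): State.RUNNING,
    (State.RUNNING, "interrupt"): State.READY,   # quantum expired
    (State.RUNNING, "io_wait"): State.WAITING,   # READ/WRITE issued
    (State.WAITING, "io_complete"): State.READY,
    (State.RUNNING, "exit"): State.TERMINATED,
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = State.NEW

    def fire(self, event):
        """Move to the next state, or raise if that arrow does not exist."""
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal transition {self.state} --{event}-->")
        self.state = TRANSITIONS[key]

p = Process(pid=1)
for event in ["admitted", "dispatch", "io_wait", "io_complete", "dispatch", "exit"]:
    p.fire(event)
    print(p.pid, event, "->", p.state.name)

Note how the table itself enforces the diagram: a process cannot, for example, go from WAITING straight to RUNNING without passing through READY.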
In the READY state, a process has been loaded into main memory by the Job Scheduler, also known as the high-level scheduler, and is awaiting execution on a CPU (i.e. waiting to be context-switched onto the CPU by the dispatcher, or short-term scheduler). There may be many ready processes at any one point of the system's execution. For example, in a single-processor system only one process can be executing at any one time, and all other "concurrently executing" processes will be waiting for execution. In a multiprogramming system, the processor must be allocated to each job or process in a fair and efficient manner.

After a process has been placed in the READY queue, the Process Scheduler takes over. The Process Scheduler is the low-level scheduler that assigns the CPU to execute the processes of those jobs. Modern computers are capable of running many different processes simultaneously; however, a CPU is only capable of handling one process at a time. Processes that are ready for the CPU are kept in a queue of "ready" processes. Based on some predefined algorithm such as First-Come, First-Served (FCFS), Shortest Job Next (SJN), Priority Scheduling or Round Robin, a process moves from the READY state to the RUNNING state; in other words, the instructions of a process are executed while it is in the RUNNING state. A "running" process is a process which is currently executing on a CPU. From this state the process may exceed its allocated time slice and be context-switched out and back to the READY state by the system, it may indicate that it has finished and be terminated, or it may issue an I/O request and move to the WAITING state.
{https://www.louiewong.com/archives/251}

3.1.1 Process Scheduling

The assignment of physical CPUs to processes allows processors to accomplish work. Process Scheduling, also known as CPU Scheduling, is the process of determining when processors should be assigned and to which processes. When more than one process is waiting to be executed, the operating system must decide which one is to be executed first. The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.

The Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program

Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.

The Objectives and Goals of Scheduling

Some of the goals of a scheduler depend on the system one is using, for example a batch system or a real-time system. A scheduler should, however, also consider the following properties: fairness, efficiency, response time, turnaround time and throughput.

Preemptive vs Non-preemptive Scheduling

Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts:
1. Preemptive Scheduling
2. Non-preemptive Scheduling

Preemptive Scheduling is a discipline in which, once a process has been given the CPU, the CPU can be taken away from it in order to execute another process. The strategy of allowing processes to be temporarily suspended in this way is called preemptive scheduling.

Non-preemptive Scheduling is a discipline whereby, once a process has been given the CPU, the CPU cannot be taken away from that process.

The following are some characteristics of non-preemptive scheduling:
1.
2.
3.

As mentioned above, CPU Scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. The following are some scheduling algorithms we will study.

First Come First Served (FCFS) Scheduling

Other names for this algorithm are:
1. First-In-First-Out (FIFO)
2. Run-to-Completion
3. Run-Until-Done

First-Come-First-Served is probably the simplest scheduling algorithm available. Processes are dispatched according to their arrival time on the ready queue. Being a non-preemptive discipline, once a process has the CPU, it runs to completion. FCFS scheduling is fair in the formal or human sense of fairness, but it is unfair in the sense that long jobs make short jobs wait, and unimportant jobs make important jobs wait. FCFS is more predictable than most other schemes. However, it is not useful in scheduling interactive users because it cannot guarantee good response time, and one major drawback is that the average waiting time is often quite long.

Shortest Job First (SJF) Scheduling

Another name for this algorithm is Shortest-Process-Next (SPN). Shortest-Job-First (SJF) is a non-preemptive discipline in which the waiting job (or process) with the smallest estimated run-time-to-completion is run next. In other words, when the CPU is available, it is assigned to the process that has the smallest next CPU burst. SJF scheduling is especially appropriate for batch jobs for which the run times are known in advance. Since the SJF scheduling algorithm gives the minimum average waiting time for a given set of processes, it is provably optimal in that respect. The SJF algorithm favours short jobs (or processes) at the expense of longer ones. The obvious problem with the SJF scheme is that it requires precise knowledge of how long a job or process will run, and this information is not usually available (there is a reliance on user estimates of run times). In a production environment where the same jobs run regularly, it may be possible to provide reasonable estimates of run time based on the past performance of the process; but in a development environment users rarely know how their program will execute. Like FCFS, SJF is non-preemptive and is therefore not useful in a timesharing environment in which reasonable response time must be guaranteed.

Round Robin Scheduling

The Round Robin scheduling algorithm is one of the oldest, simplest, fairest and most widely used algorithms. In round robin scheduling, processes are dispatched in a FIFO (First In First Out) manner but are given a limited amount of CPU time called a time slice or a quantum. If a process does not complete before its CPU time expires, the CPU is preempted and given to the next process waiting in the queue. The preempted process is then placed at the back of the ready list. Round Robin scheduling is preemptive (at the end of each time slice), and it is therefore effective in time-sharing environments in which the system needs to guarantee reasonable response times for interactive users. The only really interesting issue with the round robin scheme is the length of the quantum. Setting the quantum too short causes too many context switches and lowers CPU efficiency. On the other hand, setting the quantum too long may cause poor response time. In any event, the average waiting time under round robin scheduling is often quite long.
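As a rough illustration of the difference between a non-preemptive and a preemptive discipline, here is a minimal Python sketch that simulates FCFS and Round Robin over the same set of CPU bursts and reports the average waiting time. The burst values and the quantum of 2 are arbitrary example figures, not taken from the notes; all jobs are assumed to arrive together at time 0.

from collections import deque

def fcfs(bursts):
    """Non-preemptive FCFS: each job waits for every job ahead of it."""
    time, waits = 0, []
    for burst in bursts:
        waits.append(time)   # waiting time = time spent before dispatch
        time += burst        # runs to completion once dispatched
    return sum(waits) / len(waits)

def round_robin(bursts, quantum):
    """Preemptive Round Robin: a job exceeding the quantum goes to the back."""
    queue = deque((i, b) for i, b in enumerate(bursts))
    time, finish = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining - run > 0:
            queue.append((pid, remaining - run))  # preempted, back of the list
        else:
            finish[pid] = time
    # waiting time = finish time - burst time (arrival assumed at t=0)
    waits = [finish[i] - b for i, b in enumerate(bursts)]
    return sum(waits) / len(waits)

bursts = [24, 3, 3]                                       # example CPU bursts
print("FCFS average wait:", fcfs(bursts))                 # 17.0
print("RR (q=2) average wait:", round_robin(bursts, 2))   # about 6.33

With these particular bursts the short jobs suffer badly under FCFS (the convoy effect described above), while Round Robin lets them finish early at the cost of extra context switches for the long job.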
Priority Scheduling

Within this type of scheduling algorithm, each process is assigned a priority, and the process with the highest priority is allowed to run. Equal-priority processes are scheduled in First Come First Served (FCFS) order. The Shortest-Job-First (SJF) algorithm is a special case of the general priority scheduling algorithm: an SJF algorithm is simply a priority algorithm where the priority is the inverse of the (predicted) next CPU burst. That is, the longer the CPU burst, the lower the priority, and vice versa.

Priority can be defined either internally or externally. Internally defined priorities use some measurable quantities or qualities to compute the priority of a process. Examples of internal priorities are:
1.
2.
3.
4.

Externally defined priorities are set by criteria that are external to the operating system, such as:
1.
2.
3.
4.

Priority scheduling can be either preemptive or non-preemptive. A preemptive priority algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A non-preemptive priority algorithm will simply put the new process at the head of the ready queue.

A major problem with priority scheduling is indefinite blocking, or starvation. A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long period of time.

3.1.2 Deadlocks

"Crises and deadlocks when they occur have at least this advantage, that they force us to think." – Jawaharlal Nehru (1889–1964), Indian political leader

A set of processes is in a deadlock state if each process in the set is waiting for an event that can be caused only by another process in the set. In other words, each member of the set of deadlocked processes is waiting for a resource that can be released only by a deadlocked process. None of the processes can run, none of them can release any resources, and none of them can be awakened. It is important to note that the number of processes and the number and kind of resources possessed and requested are unimportant. A deadlock is a situation where two or more competing actions are each waiting for the other to finish, and thus neither ever does. It is often compared to a paradox like the "chicken or the egg".

A deadlock is…

Preemptable and Non-preemptable Resources

Resources can be preemptable or non-preemptable. A preemptable resource is one that can be taken away from the process with no ill effects; memory is an example of a preemptable resource. On the other hand, a non-preemptable resource is one that cannot be taken away from a process without causing ill effects; for example, CD resources are not preemptable at an arbitrary moment. Reallocating resources can resolve deadlocks that involve preemptable resources. Deadlocks that involve non-preemptable resources are difficult to deal with.

In order for a deadlock to occur, the following four conditions must all be true:
1. Mutual exclusion
2. Hold and Wait
3. No preemption
4. Circular wait

Dealing with the problems of a Deadlock…

In general, there are four strategies for dealing with the deadlock problem:
1. The Ostrich Approach – just ignore the deadlock problem altogether.
2. Deadlock Detection and Recovery – detect deadlock and, when it occurs, take steps to recover.
3. Deadlock Avoidance – avoid deadlock by careful resource scheduling.
4. Deadlock Prevention – prevent deadlock by resource scheduling so as to negate at least one of the four conditions.
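For single-instance resources, detection reduces to finding a cycle in the wait-for graph, which is exactly the circular-wait condition above. The following Python sketch is illustrative only: the graph and the process names are invented, and a real detector must also handle resources with multiple instances.

def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph: {process: [processes it waits on]}."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_stack:          # back edge => circular wait
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

# P1 waits on P2, P2 waits on P3, P3 waits on P1: a circular wait.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False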
Deadlock Detection and Recovery

Deadlock Prevention

Since all four of the conditions are necessary for deadlock to occur, it follows that deadlock might be prevented by denying any one of the conditions:

Elimination of the "Mutual Exclusion" Condition

Elimination of the "Hold and Wait" Condition

Elimination of the "No Preemption" Condition

Elimination of the "Circular Wait" Condition

3.2 Memory Management

By the term Memory Management we refer to the way in which main memory, and its relation to secondary memory, is managed by the Operating System. Primary memory, or main memory, is a large array of words or bytes, each with its own address. Main memory provides storage that can be accessed directly by the CPU; that is to say, for a program to be executed, it must be in main memory.

The major activities of an operating system in regard to memory management are:
1.
2.
3.

The basic elements that drive the design of memory are:
1.
2.
3.

3.2.1 The Memory Manager

The purpose of the memory manager is:
1.
2.
3.

Main memory is usually divided into two segments:
1. The part of memory which contains the OS kernel, which provides services to the programs/processes.
2. The current program/process being executed (of course, in a multiprogramming environment, more than one process could be loaded into memory at any unit of time, depending on requirements).

Note that Memory Management is concerned with the space within main memory that is available for the loading of users' programs/processes. The section of memory dedicated to storing the kernel cannot be managed in this way. Memory Management is the task of storing and managing multiple processes in main memory, and this task is carried out by the Memory Management layer/Memory Manager found within the OS.

3.2.2 Memory Partitioning

As mentioned before, a process is not a program; rather, a program is made up of a number of processes which are loaded into main memory for execution. A partition is a set of contiguous/adjacent locations within physical memory which are designed and implemented to hold a single process. A popular management system is known as the partitioning system. Here, memory is divided into what are known as partitions, i.e. physical areas in memory which can be of different sizes; when you load a program or instruction, it is placed into main memory according to this system. Irrespective of which type of partitioning is used, each has a common set of rules. These include:
1.
2.
3.

Partitions may be of two main types:
1. Fixed – equally sized or not equally sized
2. Dynamic

Fixed Partitioning

Fixed partitioning is the scheme where jobs are loaded into an appropriate partition when one becomes available. These partitions could all be of the same size, or there could be a number of different-sized fixed partitions. The main points to remember concerning fixed partitioning are:
1.
2.
3.
4.

Advantages of fixed partitioning include:
1.
2.

Disadvantages, however, include the following:
1.
2.
3.
4.
Because of the above inefficiencies, fixed partitioning is rarely, if ever, used.

Diagrammatic representation: fixed partitioning with unequal-sized partitions; fixed partitioning with equal-sized partitions.

Dynamic Partitioning

This type of partitioning is also referred to as variable partitioning, because this much more complex form of partitioning allocates the partitions dynamically, as and when necessary. When a task is loaded, it is allocated an appropriate amount of memory: when a new process is created, a partition is created from a block of free space, and the partition size is designed to fit the process which it is to contain. The main points to remember concerning dynamic partitioning are:
1.
2.
3.

A main disadvantage of dynamic partitioning is the tedious and time-consuming process of compaction.

Compaction

Diagrammatic representation: dynamic partitioning with external fragmentation.

3.2.3 Logical vs. Physical Address Spaces

The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.

Logical Address

Physical Address

For a better understanding: an application can be given a virtual address space of, for example, 4G. This is its usable memory, and it is free to use it as it sees fit. It is a contiguous block of memory (from the view of the application). This is known as the logical address, i.e. the address where the item seems to reside from the perspective of the application in execution. However, it is not the only application running, and the OS has to mediate between them all. Underneath that contiguous model, mapping goes on to convert logical to physical addresses. With this mapping, the OS is free to put the application's pages anywhere it wants (both in physical memory and swapped out to secondary storage). When the application tries to access memory at logical address 50, the OS can translate that to a physical address using translation tables. And if it tries to access logical memory that has been swapped out to disk, a page fault is raised and the OS can bring the relevant data back into memory, at whatever physical address it wants.

3.2.4 Virtual Memory

Virtual memory is a feature of an operating system that enables a process to use a memory (RAM) address space that is independent of other processes running in the same system, and to use a space that is larger than the actual amount of RAM present, temporarily relegating some contents of RAM to disk, with little or no overhead. In a system using virtual memory, the physical memory is divided into equally-sized pages. The memory addressed by a process is also divided into logical pages of the same size. When a process references a memory address, the memory manager fetches from disk the page that includes the referenced address, and places it in a vacant physical page in RAM. Subsequent references within that logical page are routed to the physical page. When the process references an address from another logical page, it too is fetched into a vacant physical page and becomes the target of subsequent similar references.

Therefore, virtual memory allows for the following:
1.
2.
3.
4.
5.

3.2.5 Important Terms Relating to Memory Addressing

Physical Address/Absolute Address

Logical Address

Compiler

Relative Address

Address Translation

3.2.6 Memory Management Allocation

In an environment that supports dynamic memory allocation, the memory manager must keep a record of the usage of each allocatable block of memory. This record could be kept using almost any data structure that implements linked lists. An obvious implementation is to define a free list of block descriptors, with each descriptor containing a pointer to the next descriptor, a pointer to the block, and the length of the block. The memory manager keeps a free-list pointer and inserts entries into the list in some order conducive to its allocation strategy.

A number of strategies are used to allocate space to the processes that are competing for memory. Memory allocation is the process of assigning blocks of memory on request. Typically the allocator receives memory from the operating system in a small number of large blocks that it must divide up to satisfy the requests for smaller blocks. It must also make any returned blocks available for reuse. There are many common ways to perform this, with different strengths and weaknesses.

Memory allocation is also known as the process of placement, i.e. the placing of a process from secondary store into main memory for execution. This placement is based on the application of different algorithms, depending on the type of OS in use. The list below identifies the different types of algorithms/policies which are available (three of these are illustrated in a short sketch at the end of section 3.2.8):

1. First Fit

2. Next Fit

3. Best Fit

4. Worst Fit

Diagrammatic example illustrating the Best Fit, Worst Fit and First Fit allocation schemes:

3.2.7 Memory Fragmentation and Compaction

In the diagrammatic illustration provided above, notice how the Best Fit and First Fit strategies both leave a tiny segment of memory unallocated just beyond the new process. Since this amount of memory is small, it is not likely that any new process can be loaded there. This condition of splitting memory into ever-smaller segments as memory is allocated and deallocated is known as fragmentation. The Worst Fit strategy attempts to reduce the problem of fragmentation by allocating the largest fragments to new processes; thus, a larger amount of usable space will be left over, as seen in the diagram above. Fragmentation may be handled by compaction. Compaction is the process of moving processes next to one another towards one end of memory. This results in all of the processes being contiguous, so that the free space is gathered into a single larger hole. The disadvantage of compaction is the time it takes to actually perform.

Fragmentation

3.2.8 Program Relocate-ability

As mentioned in previous sections, there are two main types of fragmentation. These are:
1. Internal Fragmentation – the allocated memory space may be slightly larger than the requested memory. Internal fragmentation is memory that is internal to a partition but which is not being used.
2. External Fragmentation – as processes are loaded into and removed from memory, the free memory space is broken into little pieces. External fragmentation exists when enough total memory space exists to satisfy a request but it is not contiguous, i.e. storage is fragmented into a large number of small holes.
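Before looking at the solutions, here is a minimal Python sketch, with invented hole sizes, showing how the First Fit, Best Fit and Worst Fit policies from section 3.2.6 choose among free holes, and how the tiny leftover fragments described above arise.

def allocate(holes, request, policy):
    """Pick a free hole for `request` units; return (index, leftover fragment)."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None  # no hole is big enough
    if policy == "first":
        i = min(i for _, i in candidates)   # first hole (lowest address) that fits
    elif policy == "best":
        i = min(candidates)[1]              # smallest hole that fits
    elif policy == "worst":
        i = max(candidates)[1]              # largest hole available
    leftover = holes[i] - request
    holes[i] = leftover                     # shrink the hole (possibly to a sliver)
    return i, leftover

for policy in ("first", "best", "worst"):
    holes = [30, 10, 45, 16]     # free holes in memory, in address order
    print(policy, "->", allocate(holes, 15, policy), "holes now", holes)

For a request of 15 units, best fit picks the 16-unit hole and leaves a 1-unit fragment that is unlikely ever to be reused, while worst fit picks the 45-unit hole and leaves a 30-unit hole that still has a chance of holding another process.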
There are several solutions to the problem of external fragmentation. Some of these are:

1. Compaction

Already discussed in the previous section, the goal of compaction is to shuffle the memory contents so as to place all of the free memory together in one large block. The following diagram illustrates compaction performed to group currently wasted free memory space into one contiguous block.

Compaction is not always possible. In the above drawing, Process 3 and Process 4 have been moved. Therefore, in order for these processes to be able to execute in their new locations, all of their internal addresses need to be relocated. If relocation is static, i.e. performed at assembly or at load time, then compaction is not possible. Compaction is only possible if relocation is dynamic, i.e. done at execution time.

2. Swapping

Swapping can also be combined with compaction. A process can be rolled out of main memory to a backing store and rolled in again later. When the process is rolled out, its memory space is released and can perhaps be used by another process. When the process is rolled back in, several problems may arise: if static relocation is used, the process must be rolled into exactly the same memory location that it occupied previously, a restriction which may require that other processes be rolled out to free that memory. If dynamic relocation is used, then a process can be rolled into a different location. In this case, the OS must first find a free block, compact if required, and roll in the process.

3. Paging

Paging provides another possible solution to the problem of external fragmentation. Paging allows the physical address space of a process to be non-contiguous, therefore allowing a process to be allocated physical memory wherever it is available. Paging avoids the main problem of fitting varying-sized memory chunks onto the backing store, from which most of the previous memory management schemes suffered.

The basic method of paging

Physical memory is broken into fixed-sized blocks called frames. Logical memory is also broken down into blocks of the same size, called pages. When a process is executed, its pages are loaded into any available memory frames from the backing store. The backing store is divided into fixed-sized blocks that are of the same size as the memory frames.

In the diagram illustration, every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit. The page size, as well as the frame size, is defined by the hardware. The size of a page is typically a power of 2, varying in byte size depending on the computer architecture. When we use a paging scheme, we have no external fragmentation, since any free frame can be allocated to a process that needs it. However, we may have some internal fragmentation.
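A minimal sketch of the translation just described. The 1 KiB page size and the page-table contents are invented for the example; a real MMU performs this split in hardware, using the low-order bits of the address as the offset.

PAGE_SIZE = 1024                   # assumed page/frame size: 1 KiB (a power of 2)
page_table = {0: 5, 1: 2, 2: 7}    # page number -> frame number (example mapping)

def translate(logical_address):
    """Split a logical address into (p, d) and map it to a physical address."""
    p = logical_address // PAGE_SIZE    # page number: index into the page table
    d = logical_address % PAGE_SIZE     # page offset within the page
    frame = page_table[p]               # raises KeyError if the page is unmapped
    return frame * PAGE_SIZE + d        # frame base address + offset

# Logical address 1030 lies in page 1 at offset 6, so it maps into frame 2.
print(translate(1030))   # 2*1024 + 6 = 2054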
It is also important to note that because the operating system is managing physical memory, it must be aware of the allocation details of physical memory, i.e. which frames are allocated, which frames are available, how many total frames there are, etc. This information is usually kept in a frame table. The frame table has one entry for each physical page frame, indicating whether the latter is free or allocated and, if it is allocated, to which page of which process(es).

Also, the OS must be aware that user processes operate in user space, and all logical addresses must be mapped to produce physical addresses. If a user makes a system call (for example, to do an I/O) and provides an address as a parameter (for example, a buffer), that address must be mapped to produce the correct physical address. The OS maintains a copy of the page table for each process, just as it maintains a copy of the instruction counter and register contents. This copy is used to translate logical addresses to physical addresses whenever the OS has to map a logical address manually. It is also used by the CPU dispatcher to define the hardware page table when a process is to be allocated the CPU. Paging therefore increases the context-switch time.

In terms of the optimal page size, one may say that there is no optimum page size. If the page size is too small, processes will need to make use of many page frames, resulting in extensive page tables taking up memory space. Conversely, if the page size is large, then page tables will inevitably be shorter, but there will be a lot of wastage, since each process must occupy an integral number of page frames (therefore internal fragmentation will be higher). One may therefore summarise:

Large page size –
Small page size –

Alternative paging diagram…

3.2.9 Memory Store Protection

Protection of memory is essential, especially in an environment where multiple processes are loaded into and out of memory. This protection involves processes not being able to access areas of memory which are allocated to other, different processes. Should a particular process be erroneous, it may easily corrupt other processes if memory partitions can be accessed by more than one process, and it might cause the entire OS to crash. Protection involves ensuring that a process only accesses memory within its own allocated partition. This is usually done by hardware: CPUs which contain a base register to facilitate relocation usually also include a limit register to enforce protection. The operating system loads the base and limit registers, and any attempt by a process to access an address outside the base and limit assigned by the operating system is a memory violation. If the CPU detects a memory violation, an interrupt is generated; the OS can then take any remedial action necessary, for example terminating the process.

Memory protection in a paged environment is accomplished by using protection bits which are associated with each frame. Usually, these protection bits are kept within the page table. One bit can define a page as read-write or read-only. Every reference to memory goes through the page table to find the correct frame number. At the same time that the physical address is being computed, the protection bits can be checked to verify that no writes are being made to a read-only page. Any attempt to write to a read-only page will cause a memory protection violation to the OS.
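A minimal sketch of the base/limit check described above, with invented register values. A real CPU performs this comparison in hardware on every memory reference and generates an interrupt (trap) on violation.

class MemoryViolation(Exception):
    """Stands in for the interrupt the CPU would generate."""

BASE, LIMIT = 3000, 1200   # assumed values loaded by the OS for this process

def check(logical_address):
    """Map a process-relative address, trapping anything outside the partition."""
    if not 0 <= logical_address < LIMIT:
        raise MemoryViolation(f"address {logical_address} outside partition")
    return BASE + logical_address   # physical address within [BASE, BASE+LIMIT)

print(check(100))    # 3100: inside the partition
try:
    check(2000)      # beyond the limit register
except MemoryViolation as e:
    print("trap:", e)   # OS takes remedial action, e.g. terminates the process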
3.2.10 File Organization

Computers can store information on several different storage media, such as magnetic disks, magnetic tapes and optical disks. So that the computer system will be convenient to use, the OS provides a uniform logical view of information storage. The OS abstracts from the physical properties of its storage devices to define a logical storage unit, known as the file. Files are mapped by the OS onto physical devices, which are usually non-volatile. The following are common points relating to files:
i. Usually non-volatile, i.e. the contents are not lost through power failures and system reboots.
ii. A named collection of related information that is recorded on secondary storage.
iii. Commonly, files represent programs (in source and object form) and data. Data files may be numeric, alphabetic, alphanumeric or binary.
iv. In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by the file's creator and user.

These types may include:
1.
2.
3.
4.
5.
6.
7.
8.

3.2.11 File Attributes

A file has certain attributes, which may vary from one OS to another, but typically consist of the following:
1. Name
2. Type
3. Location
4. Size
5. Protection
6. Time, date and user identification

3.2.12 File Operations

A file is what is known as an abstract data type. The OS provides system calls to create, write, read, reposition, delete and truncate files. The following are considerations on what the OS must do for each of these six basic file operations.

1. Creating a file

2. Writing a file

3. Reading a file

Note:

4. Repositioning a file

5. Deleting a file

6. Truncating a file

Other common file operations include:
1. Appending new information to the end of an existing file
2. Renaming an existing file
3. Creating a copy of a file

When opening a file, the following pieces of information are associated with this action:

i. File pointer – the file pointer is used to keep track of the last read/write location, as the current-file-position pointer (usually replaced by a file offset).

ii. File open count – as files are closed, the OS must reuse its open-file table entries (retained by the OS to keep information about open files), or it could run out of space in the table. Because multiple processes may open a file, the system must wait for the last file close before removing the open-file table entry. This counter tracks the number of opens and closes, and reaches zero on the last close; the system can then remove the entry.

iii. Disk location of the file – most file operations require the system to modify data within the file. The information needed to locate the file on disk is kept in memory to avoid having to read it from disk for each operation.

3.2.13 File Structure

In order to improve efficiency, I/O transfers between memory and disk (which provides the bulk of the secondary storage on which a file system is maintained) are performed in units of blocks. Each block is one or more sectors. Disks have two important characteristics that make them convenient for storing multiple files:
1.
2.

The first reference to a file, normally an 'open', causes the directory structure to be searched and the directory entry for this file to be copied into the table of opened files. The index into this table is returned to the user program, and all further references are made through the index (a file descriptor or file control block (FCB)) rather than with a symbolic name.
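A minimal sketch of the idea just described: 'open' searches the directory once, copies the entry into an open-file table, and hands back a small integer index (descriptor) used for all later references. The names and structures here are invented for illustration.

directory = {"notes.txt": {"start_block": 9, "size": 3}}   # example directory

open_file_table = []   # descriptor -> copy of the directory entry + file pointer

def open_file(name):
    """Search the directory once; later accesses use the returned index only."""
    entry = dict(directory[name])    # copy the directory entry
    entry["file_pointer"] = 0        # current read/write position
    open_file_table.append(entry)
    return len(open_file_table) - 1  # the file descriptor

def read(fd, nbytes):
    """Advance the file pointer, as a sequential read would."""
    entry = open_file_table[fd]
    entry["file_pointer"] += nbytes
    return entry["file_pointer"]

fd = open_file("notes.txt")              # directory searched only here
print(fd, read(fd, 100), read(fd, 50))   # 0 100 150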
A file control block is the unit of storage space found on a storage device such as a hard disk; this block will contain the actual data of a file, up to a maximum storage capacity as defined by the block size. The information you would expect to find within the file control blocks is:
1.
2.
3.

These are managed by the user's file directory, which keeps track of the addresses of the blocks, the start address and the end address, depending on the file management system being used. Examples: contiguous, chained and indexed.

3.2.14 File Access Methods

Files store information, and when a file is used, this information must be accessed and read into computer memory. There are several ways in which the information in a file can be accessed. Some systems provide only one access method for files, whilst other systems, such as those of IBM, support a number of different access methods, and choosing the right one for a particular application is a major design consideration.

Sequential Access

The simplest access method is sequential access. Information in the file is processed in order, one record after the other. This mode of access is by far the most common; for example, editors and compilers usually access files in this way. The bulk of the operations on a file are reads and writes. A read operation reads the next portion of the file and automatically advances a file pointer, which tracks the I/O location. Similarly, a write appends to the end of the file and advances to the end of the newly written material (the new end of file).

Direct/Relative Access

Here a file is made up of fixed-length logical records that allow programs to read and write records rapidly in no particular order. The direct access method is based on a disk model of a file, since disks allow random access to any file block. For direct access, the file is viewed as a numbered sequence of blocks or records. A direct-access file allows arbitrary blocks to be read or written: we may read block 13, then block 68, and then write to block 7. There are no restrictions on the order of reading or writing for a direct-access file. Direct access files are of great use for immediate access to large amounts of information, and databases are often of this type. When a query concerning a particular subject arrives, we compute which block contains the answer and then read that block directly to provide the desired information.

Indexed Access

Also known as ISA (Indexed Sequential Access), this access method is a slight modification of the direct access method; it is in fact a combination of sequential access and direct access. The main concept is to access a file directly first, and then sequentially from that point onwards. This access method involves maintaining an index. The index is a pointer to a block. To access a record in a file, a direct access of the index is made. The information obtained from this access is then used to access the file: for example, the direct access to the index will give the block address, and within that block the record is accessed sequentially. Sometimes indexes may be big (especially when the files themselves are large in size), so hierarchies of indexes are built, in which one direct access of an index leads to the information needed to access another index directly, and so on, until the actual file is accessed sequentially for the particular record. The main advantage of this type of access is that both direct and sequential access of files is possible.
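A minimal sketch of an indexed-sequential lookup under invented data: a direct probe of a small index yields a block, and the record is then found by a sequential scan within that block. The keys and block layout are made up for the example.

import bisect

# Index: first key held in each block -> block number (kept sorted by key).
index_keys = [100, 200, 300]
index_blocks = [0, 1, 2]
blocks = [
    [(100, "rec A"), (150, "rec B")],   # block 0
    [(200, "rec C"), (250, "rec D")],   # block 1
    [(300, "rec E"), (370, "rec F")],   # block 2
]

def lookup(key):
    """Direct access to the index, then a sequential scan inside one block."""
    i = bisect.bisect_right(index_keys, key) - 1   # which block could hold the key
    if i < 0:
        return None
    for k, record in blocks[index_blocks[i]]:      # sequential from block start
        if k == key:
            return record
    return None

print(lookup(250))   # "rec D"
print(lookup(260))   # None: key absent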
3.2.15 File Allocation of Storage Space

Sometimes also referred to as Backing Store Management, the file allocation method is the way disk space is allocated and managed. It manages data to be written to, or read from, backing store devices such as hard disks. The file allocation table is the data structure used to keep track of the allocated portions. There are three main allocation methods, and these are:
i. Contiguous
ii. Chained/Linked
iii. Indexed

Contiguous Allocation

The contiguous allocation method requires each file to occupy a set of contiguous blocks on disk. Disk addresses define a linear ordering on the disk, and with this ordering, accessing block b+1 after block b requires no head movement, since both lie on one track. Note: head movement is normally needed only when moving from the last sector of one cylinder to the first sector of the next cylinder. Therefore, the number of disk seeks required for accessing contiguously allocated files is minimal.

The contiguous allocation of a file is defined by the disk address of the first block and the file's length (in block units). If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, …, b+(n-1). The directory entry for each file indicates the address of the starting block and the length of the area allocated for that file. Accessing a file that has been allocated contiguously is easy:
1.
2.

Contiguous allocation of disk space

Contiguous allocation therefore supports both sequential and direct access.

Problems with contiguous allocation:

i. One difficulty with contiguous allocation is finding space for a new file, particularly under dynamic storage allocation, i.e. satisfying a request of size n from a list of free holes. First fit and best fit are the most common strategies used to select a free hole from the set of available holes; next fit is also commonly used, with worst fit being, of course, the worst in terms of storage allocation. These algorithms suffer from external fragmentation: as files are allocated and deleted, the free disk space is broken into little pieces which are scattered across the disk.

ii. A major problem is determining how much space is needed for a file. When a file is created, the total amount of space it will need must be found and allocated. How does the creator know the size of the file to be created? In some cases this may be easy, say if a file is copied; however, in general the size of an output file may be difficult to estimate. Problems can arise when files grow, or if the exact size of a file is unknown at creation time:
1.
2.
3.

Chained/Linked Allocation

Linked allocation solves the problems of contiguous allocation. Here, each file is a linked list of disk blocks, and the blocks may be scattered anywhere on the disk. The directory then contains a pointer to the first and last blocks of the file. For example, a file of five blocks might start at block 9, continue at block 16, then go to block 1, block 10 and finally block 25. Each block contains a pointer to the next block. These pointers are not made available to the user: if each block is 512 bytes in size, of which 4 bytes are allocated to the pointer, the user will see a block of 508 bytes.
Chained/Linked allocation of disk space

In order to create a new file, you simply create a new entry in the directory. With linked allocation, each directory entry has a pointer to the first disk block of the file (initialised to nil, signifying an empty file).

Advantages:
1. No external fragmentation.
2. Any free block on the free-space list can be used to satisfy a request.
3. No need to declare the size of a file when it is created, as the file can continue to grow as long as there are free blocks.

Disadvantages:
1. It can only be used effectively for sequential-access files (to find the ith block of a file, we must start at the beginning of that file).
2. It is inefficient to support direct-access capability for linked allocation files.
3. Space is required for the pointers.
4. Lack of reliability: since the files are linked together by pointers scattered all over the disk, a bug in the system software (OS) or a disk hardware failure might result in following the wrong pointer.

File Allocation Table (FAT)

The FAT is an important variation on the linked allocation method (a short traversal sketch is given at the end of this section). A section of disk at the beginning of each partition is set aside to contain the table. The table has one entry for each disk block and is indexed by block number. The FAT is used much like a linked list: the directory entry contains the block number of the first block of the file, the table entry indexed by that block number then contains the block number of the next block in the file, and this chain continues until the last block, which has a special end-of-file value in its table entry.

Indexed Allocation

Indexed allocation solves the problem of the inefficient direct access offered by linked allocation. In this scheme, all the pointers are brought together into one location known as the index block.
1.
2.
3.

Indexed allocation of disk space

When the file is created, all pointers in the index block are set to nil. When the ith block is first written, a block is obtained from the free-space manager, and its address is put in the ith index-block entry.

Advantages:
1. Supports direct access.
2. Does not suffer from external fragmentation (since any free block on disk may satisfy a request for more space).
3. No wasted space.

One drawback, however, is that the pointer overhead of the index block is generally greater than the pointer overhead of linked allocation.

3.2.16 Hash Table

Suppose we have a table consisting of 1000 entries, and suppose that we run a club with no more than 1000 members, each with a five-digit secret membership code. The main problem is: how do we map the membership codes to these 1000 storage locations in our table? One method that gives very quick access to the stored data is called hashing. A hash table is a data structure that can be used for a file directory. In this method a linear list stores the directory entries, but a hash data structure is also used. The hash table takes a value computed from the file name and returns a pointer to the file name in the linear list; it can therefore greatly decrease the directory search time. Insertion and deletion, although straightforward, require provision against collisions, i.e. situations where two file names hash to the same location. The main difficulties with hash tables are the fixed size of the hash table and the dependency of the hash function on that size.
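Returning to the FAT scheme above, here is a minimal Python sketch. The table contents follow the five-block example given under linked allocation (9 to 16 to 1 to 10 to 25); the end-of-file marker value is invented.

EOF = -1   # assumed special end-of-file value in the table entry

# fat[block] holds the number of the next block of the same file.
fat = {9: 16, 16: 1, 1: 10, 10: 25, 25: EOF}

def file_blocks(first_block):
    """Follow the FAT chain from the directory's first-block entry."""
    chain = []
    block = first_block
    while block != EOF:
        chain.append(block)
        block = fat[block]   # next block of the file, or EOF
    return chain

# The directory entry for the file stores only the first block number, 9.
print(file_blocks(9))   # [9, 16, 1, 10, 25]

Because the whole chain lives in the table at the start of the partition rather than inside the data blocks, the chain can be followed without reading each data block, which is what improves direct access over plain linked allocation.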
3.2.17 Protection of Files

When information is kept in a computer system, a major concern is its protection from both physical damage (reliability) and improper access (protection).

Reliability is generally provided by keeping duplicate copies of files. Many computers have system programs that automatically (or through computer-operator intervention) copy disk files to tape at regular intervals (say once per day or week) in order to maintain a copy should a file system be accidentally destroyed.

File systems can also be damaged by:
Hardware problems (e.g. errors in reading and writing)
Power failures
Head crashes
Dirt
Temperature extremes
Vandalism.

Files may also be deleted accidentally, and bugs in file-system software can cause file contents to be lost.

Protection can be provided in many ways. For a small single-user system, we might provide protection by physically removing the DVDs containing backups or information and locking them in a file cabinet. When multi-user systems are involved, however, other mechanisms are needed.

Controlled access – protection mechanisms provide controlled access by limiting the types of file access that can be made. Access is permitted or denied depending on several factors, one of which is the type of access requested. Several different types of operation can be controlled:

Read – read from the file
Write – write or rewrite the file
Execute – load the file into memory and execute it
Append – write new information at the end of the file
Delete – delete the file and free its space for possible reuse
List – list the name and attributes of the file.

Protection is often provided only at this lower level; copying a file, for example, may be implemented simply as a sequence of read requests, so a user with read access can also cause the file to be copied, printed and so on.

Access lists – a common approach is to make access dependent on the identity of the user. The most basic scheme to implement identity-dependent access is to associate with each file and directory an access list, which specifies the user name and the types of access allowed for each user. The main problem with an access list is its length: if we want to allow everyone to read a file, we must list all users. To prevent this inconvenience, we can make use of a condensed version of the access list.

To condense the length of the access list, many systems recognize three classifications of users in connection with each file (a sketch of such a check follows this list):

i. Owner – the user who created the file
ii. Group – a set of users who are sharing the file and need similar access (a group or workgroup)
iii. Universe – all other users in the system
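The following Python sketch shows how an owner/group/universe check might work in principle. The file record, user names, group names and permission sets are hypothetical, invented for this example, and are not taken from any real system.

# A toy access check in the condensed owner/group/universe style.
def allowed(file, user, user_groups, operation):
    # Classify the requesting user, then look up the permitted
    # operations for that class of user.
    if user == file["owner"]:
        cls = "owner"
    elif file["group"] in user_groups:
        cls = "group"
    else:
        cls = "universe"
    return operation in file["perms"][cls]

doc = {
    "owner": "claire",                    # hypothetical user names
    "group": "staff",
    "perms": {
        "owner":    {"read", "write", "execute", "append", "delete", "list"},
        "group":    {"read", "append"},
        "universe": {"read"},
    },
}

print(allowed(doc, "claire", {"staff"}, "delete"))   # True  (owner)
print(allowed(doc, "bob",    {"staff"}, "append"))   # True  (group member)
print(allowed(doc, "eve",    set(),     "write"))    # False (universe)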
3.3 Handling of I/O Operations

To be discussed under the topic of Computer Architecture.

3.4 Interrupt Handling

To be discussed under the topic of Computer Architecture.

IMPORTANT
Refer to the extra notes provided for the 7 Layers of an Operating System, which define the basis of the structure of any OS. These are listed as the following:
1. Kernel
2. Memory Management
3. I/O Control
4. Backing Store Management
5. Resource Allocation
6. Protection
7. Accounting

THE STRUCTURE OF AN OPERATING SYSTEM

The following is a diagram listing the main components forming the basic structure of any OS. All of the layers below have been discussed in detail throughout this chapter.

Kernel: the lowest level of any OS. It provides services for the following:
1. Handling of interrupts, i.e. signals generated by a source which cause a break in the execution of the current routine (e.g. printer out of ink or paper).
2. Allocation of work to the processor, i.e. normally more than one job exists in the system simultaneously, and the kernel ensures that switching can occur from one job to the next.
3. Co-ordination of processes, i.e. the OS provides the mechanism for switching the processor from one job to another, sharing the processor time amongst the different tasks in the system in a way that maximises processor utilization.

Memory Management: the OS manages the allocation of memory to different processes/jobs, given that main memory is too small to hold all programs and data at one time. Partitions in RAM are allocated to processes for execution; everything else is kept on backing store.

Use of virtual memory – this technique is implemented by this layer of the OS and makes the computer appear to have more memory than it physically has.

Caching – a technique used to enhance system performance. Some very fast memory called cache memory, based on SRAM, is reserved to store sections of the executing routines in order to take advantage of its short access time, reducing the time taken by the fetch-decode-execute cycle.

I/O Control: the OS hides from the user the differences in speed of the various I/O devices. This layer provides device independence: from a user's point of view all devices appear to have the same characteristics and are all instructed in the same way. It deals with the physical aspects of the data transfer, leaving the programmer free to concentrate on the logical aspects of the data structures transferred.

Spooling – data to be output is held on a spool queue on backing store, or in a buffer, until the output device is ready for it.

Spooling (simultaneous peripheral operations on-line) refers to putting jobs in a buffer, a special area in memory or on a disk, where a device can access them when it is ready. Spooling is useful because devices access data at different rates: the buffer provides a waiting station where data can rest while the slower device catches up.

The most common spooling application is print spooling. In print spooling, documents are loaded into a buffer (usually an area on a disk), and then the printer pulls them off the buffer at its own rate. Because the documents are in a buffer where they can be accessed by the printer, you can perform other operations on the computer while the printing takes place in the background. Spooling also lets you place a number of print jobs in a queue instead of waiting for each one to finish before specifying the next one; a minimal sketch follows.
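Here is a minimal Python sketch of a print spool queue, assuming, for illustration only, that the printer signals its readiness by calling printer_tick(); the function names and timings are invented.

# A toy print-spooling queue. Jobs are buffered in FIFO order so the
# submitting program can carry on immediately; the (much slower)
# printer drains the queue at its own rate.
from collections import deque

spool = deque()

def submit(document):
    spool.append(document)        # returns at once: printing happens
                                  # in the background

def printer_tick():
    # Called whenever the printer is ready for its next job.
    if spool:
        job = spool.popleft()     # jobs leave in the order they arrived
        print("printing:", job)

for doc in ["letter.txt", "notes.pdf", "exam.doc"]:
    submit(doc)                   # all three submitted without waiting

printer_tick()                    # printing: letter.txt
printer_tick()                    # printing: notes.pdf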
Backing Store Management: the OS manages how programs and data are stored on secondary storage. Data on backing store are kept in files, and the OS is responsible for the creation, deletion and updating of these files. Some files can be shared whilst others are private, and the OS has to make sure that users respect these restrictions/privileges. This layer of the OS must of course cooperate with the previous one, i.e. memory management, during the transfer of data to and from backing store.

Resource Allocation and Scheduling: resources are usually shared by a number of users and, because of this, their demand is usually greater than their availability.

Resource allocation – the OS must apply a policy deciding which user gets serviced first. A FIFO (first-in, first-out) algorithm is the simplest, but it is also the most prone to deadlock, i.e. the situation in which two or more processes prevent each other from continuing because each claims a resource that is held by another. The OS therefore has to guard against the occurrence of deadlocks.

Scheduling – used to ensure efficient use of the processor. Its task is to allocate processor time to processes. Various scheduling algorithms are available.

Protection: the OS must protect the system against errors and system malfunction. It should detect and diagnose errors in application programs as soon as possible. The OS should also protect software from system malfunction; this is achieved by a periodic dump of certain files onto a suitable backup medium. Also, a power failure is immediately recognized, and the essential contents of the CPU are saved to disk in the few milliseconds before the power falls to a level at which the system has to shut down. Protection against abuse is more difficult to deal with: protection mechanisms designed to prevent unauthorised activities are not always foolproof.

Accounting: the OS must keep a record of the charges incurred by each user program, such as processor time, use of backing store and printer paper. Information is passed to the accounting module from the scheduler to establish how much processor time a program has used. Most multi-user OSs take note of the number of times a user logs on/off the system, and some OSs keep track of all the actions of a user by logging all of the user's commands to a file.
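Finally, a toy Python sketch of the sort of bookkeeping the accounting layer performs. The (user, cpu_seconds) event reported by the scheduler after each time slice is an invented convention for this example, not a real OS interface.

# A toy accounting module, tallying per-user resource usage from
# events passed in by the scheduler and the login system.
from collections import defaultdict

cpu_used = defaultdict(float)   # total processor time per user
logins = defaultdict(int)       # how many times each user logged on

def record_slice(user, cpu_seconds):
    # Called by the scheduler after each time slice.
    cpu_used[user] += cpu_seconds

def record_login(user):
    logins[user] += 1

record_login("claire")
record_slice("claire", 0.05)    # two 50 ms slices reported
record_slice("claire", 0.05)
print(cpu_used["claire"], logins["claire"])   # 0.1 1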