Document Details

Uploaded by EvocativeOrangutan5036

Universiti Sains Malaysia

2018

Mchoes/Flynn

Tags

device management operating systems I/O devices computer science

Summary

This document, Chapter 7 of 'Understanding Operating Systems', provides a detailed overview of device management, covering various device types, performance measures, and seek strategies. It explains different types of devices and their characteristics, including how they are used and connected. The book is likely aimed at computer science undergraduates and higher-level students.

Full Transcript


Device Management
Chapter 7

Mchoes/Flynn, Understanding Operating Systems, 8th Edition. © 2018 Cengage. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.

Learning Objectives
After completing this chapter, you should be able to describe:
- How dedicated, shared, and virtual devices compare
- How blocking and buffering can improve I/O performance
- How seek time, search time, and transfer time are calculated
- How the access times for several types of devices differ
- The strengths and weaknesses of common seek strategies
- How the levels of RAID vary from each other

Types of Devices (1 of 3)
- Three categories: dedicated, shared, and virtual
- Dedicated device: assigned to one job at a time, for the entire time the job is active (or until the device is released)
  - Examples: tape drives, printers, and plotters
  - Disadvantage: must be allocated for the duration of the job's execution, which is inefficient if the device is not used 100 percent of the time

Types of Devices (2 of 3)
- Shared device: assigned to several processes
  - Example: a direct access storage device (DASD); processes share the DASD simultaneously and their requests are interleaved
- Device Manager supervision: controls the interleaving; predetermined policies determine how conflicts are resolved

Types of Devices (3 of 3)
- Virtual device: a combination of dedicated and shared devices; a dedicated device transformed into a shared device
  - Example: a printer can be converted into a sharable device by a spooling program that reroutes all print requests to storage space on a disk
  - Spooling speeds up slow dedicated I/O devices
- Universal serial bus (USB) controller: the interface between the operating system, device drivers, applications, and the devices attached via the USB host
  - Assigns bandwidth to each device on a priority basis: high priority for real-time exchanges that tolerate no interruption (a movie); medium priority for exchanges that can allow occasional interrupts (a keyboard); low priority for exchanges that can accommodate slower data flow (a file update)

I/O Devices in the Cloud
- The operating system's role in accessing remote I/O devices is essentially the same as its role in accessing local devices
- The cloud simply provides access to many more devices
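Spooling is easy to picture as a small queue-and-daemon arrangement. The sketch below is a minimal illustration, not the text's implementation: the file names, function names, and timing are hypothetical, and a real spooler lives inside the operating system rather than in application code.

```python
# A minimal spooling sketch (names are illustrative): print requests are
# rerouted to disk-backed spool entries, and a single daemon drains the
# queue so the dedicated printer behaves like a shared (virtual) device.
import queue
import threading
import time

spool_queue = queue.Queue()          # holds paths to spooled job files

def submit_print_job(job_id, text):
    """Reroute a print request to storage instead of the printer itself."""
    path = f"spool_job_{job_id}.txt"  # hypothetical spool file on disk
    with open(path, "w") as f:
        f.write(text)
    spool_queue.put(path)

def printer_daemon():
    """Drain spooled jobs one at a time: the printer stays dedicated to a
    single job, but many processes can submit requests concurrently."""
    while True:
        path = spool_queue.get()
        with open(path) as f:
            print(f"[printer] {f.read()!r}")
        time.sleep(0.1)              # stand-in for slow device output
        spool_queue.task_done()

threading.Thread(target=printer_daemon, daemon=True).start()
for i in range(3):
    submit_print_job(i, f"report page {i}")
spool_queue.join()                   # wait until all spooled jobs print
```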
Sequential Access Storage Media (1 of 6)
- Magnetic tape: routine secondary storage in early computer systems
- Records are stored serially; record length is determined by the application program; a record is identified by its position on the tape
- Record access: the tape moves past the read/write head only when access is requested for a read or write, a time-consuming process

Sequential Access Storage Media (2 of 6)
- (figure) Nine-track magnetic tape with three characters recorded using odd parity
- A half-inch-wide reel of tape, typically used to back up a mainframe computer, can store thousands of characters, or bytes, per inch
- Tape density: the number of characters recorded per inch; depends on the storage method (individual or blocked records)

Sequential Access Storage Media (3 of 6)
- Interrecord gap (IRG): the tape needs time and space to stop, so a 1/2-inch gap is inserted between each record; the gap is the same size regardless of the sizes of the records it separates
- Blocking: grouping records into blocks
- Transfer rate = (tape density) × (transport speed)
- Interblock gap (IBG): a 1/2-inch gap inserted between each block; more efficient than individual records separated by IRGs
- Optimal block size: the entire block fits in the buffer

Sequential Access Storage Media (4 of 6)
- Each record requires only 1/10 inch of tape. When 10 records are stored individually on magnetic tape, they are separated by IRGs, which add up to 4.5 inches of tape, for a total of 5.5 inches
- (figure) Two blocks of records stored on magnetic tape, each preceded by an IBG of 1/2 inch. Each block holds 10 records, each of which is still 1/10 inch; the block, however, is 1 inch, for a total of 1.5 inches

Sequential Access Storage Media (5 of 6)
- Blocking advantages: fewer I/O operations are needed (a single READ command can move an entire block); less wasted tape space
- Blocking disadvantages: overhead and software routines are needed for blocking, deblocking, and record keeping; buffer space may be wasted

Direct Access Storage Devices
- Random access storage devices: can directly read or write to a specific disk area
- Three categories: magnetic disks, optical discs, and solid state (flash) memory
- Access time variance is not as wide as with magnetic tape; the record's location directly affects access time
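The 10-record example and the transfer-rate formula are simple enough to check with a few lines of Python. The record and gap sizes below come from the slides; the tape density and transport speed are illustrative values, not figures from the text.

```python
# Tape-space and transfer-rate arithmetic from the slides (record size and
# gap size as given; density and transport speed are illustrative values).
RECORD_LEN_IN = 0.1      # each record occupies 1/10 inch of tape
GAP_IN = 0.5             # an IRG or IBG is 1/2 inch

def unblocked_length(num_records):
    # records stored individually: one gap between each pair of records
    return num_records * RECORD_LEN_IN + (num_records - 1) * GAP_IN

def blocked_length(num_records, records_per_block):
    # each block is preceded by one IBG
    blocks = -(-num_records // records_per_block)   # ceiling division
    return num_records * RECORD_LEN_IN + blocks * GAP_IN

print(unblocked_length(10))          # 5.5 inches, as in the example
print(blocked_length(10, 10))        # 1.5 inches, as in the example

# Transfer rate = tape density * transport speed
density_cpi = 1600                   # characters per inch (illustrative)
speed_ips = 200                      # inches per second (illustrative)
print(density_cpi * speed_ips, "characters per second")
```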
Magnetic Disk Storage (1 of 2)
- Computer hard drives: a single platter or a stack of magnetic platters
- (figure 7.6) A disk pack is a stack of magnetic platters. The read/write heads move between each pair of surfaces, and all of the heads are moved in unison by the arm

Magnetic Disk Storage (2 of 2)
- Each platter has two recording surfaces (top and bottom); each surface is formatted, which determines where the data is recorded
- Concentric tracks are numbered from track 0 on the outside to the highest track number in the center
- The read/write heads move in unison, forming a virtual cylinder
- Accessing a record: the system needs three things: the cylinder number, the surface number, and the sector number

Access Times
- File access time has three factors:
  - Seek time (slowest): the time to position the read/write head on the proper track; does not apply to devices with fixed read/write heads
  - Search time (rotational delay): the time to rotate the DASD until the desired record is under the read/write head
  - Transfer time (fastest): the time to transfer the data from secondary storage to main memory

Fixed-Head Magnetic Drives (1 of 2)
- Record access requires two items: the track number and the record number
- Total access time = search time + transfer time
- DASDs rotate continuously, so there are three basic positions for the requested record in relation to the read/write head
- Blocking minimizes access time

Fixed-Head Magnetic Drives (2 of 2)
- (figure) As a disk rotates, Record 1 may be near the read/write head and ready to be scanned, as seen in (a); in the farthest position, just past the head, (c); or somewhere in between, the average case, (b)
- (table) Benchmark access times for a fixed-head disk drive at 16.8 ms/revolution:
  - Maximum access time: 16.8 ms + 0.00094 ms/byte
  - Average access time: 8.4 ms + 0.00094 ms/byte
  - Sequential access time: depends on the length of the record (known transfer rate)

Movable-Head Magnetic Disk Drives
- Access time = seek time + search time + transfer time
- The search time and transfer time calculations are the same as for a fixed-head DASD
- Blocking is a good way to minimize access time
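The benchmark formulas above drop directly into code. In the sketch below, the 16.8 ms revolution time and the 0.00094 ms/byte transfer cost come from the table; the record size and the seek time used for the movable-head case are illustrative assumptions.

```python
# Access-time arithmetic for the benchmark fixed-head drive on the slides
# (16.8 ms per revolution, 0.00094 ms per byte transferred).
REVOLUTION_MS = 16.8
TRANSFER_MS_PER_BYTE = 0.00094

def fixed_head_access_ms(record_bytes, search_ms):
    # fixed head: no seek; total = search (rotational delay) + transfer
    return search_ms + record_bytes * TRANSFER_MS_PER_BYTE

def movable_head_access_ms(record_bytes, seek_ms, search_ms):
    # movable head adds seek time to position the arm over the track
    return seek_ms + search_ms + record_bytes * TRANSFER_MS_PER_BYTE

record = 1024  # bytes (illustrative record size)
print(fixed_head_access_ms(record, REVOLUTION_MS))        # worst case: full revolution
print(fixed_head_access_ms(record, REVOLUTION_MS / 2))    # average case: half revolution
print(movable_head_access_ms(record, seek_ms=5.0,         # illustrative seek time
                             search_ms=REVOLUTION_MS / 2))
```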
Device Handler Seek Strategies (1 of 7)
- A predetermined policy used by the device handler to decide the order in which requests are processed; the goal is to minimize seek time
- Types: first-come, first-served (FCFS); shortest seek time first (SSTF); SCAN (including LOOK, N-Step SCAN, C-SCAN, and C-LOOK)
- Scheduling algorithm goals: minimize arm movement, mean response time, and the variance in response time

Device Handler Seek Strategies (2 of 7)
- First-come, first-served (FCFS): on average, does not meet the three seek strategy goals; its disadvantage is extreme arm movement
- (figure 7.10) The arm makes many time-consuming movements as it travels from track to track to satisfy all requests in FCFS order

Device Handler Seek Strategies (3 of 7)
- Shortest seek time first (SSTF): serves the request whose track is closest to the one currently being served; minimizes overall seek time; postpones traveling to out-of-the-way tracks
- (figure 7.11) Using the SSTF algorithm, with all track requests on the wait queue, arm movement is reduced by almost one third while satisfying the same requests shown in Figure 7.10, which used the FCFS algorithm

Device Handler Seek Strategies (4 of 7)
- SCAN: uses a directional bit that indicates whether the arm is moving toward or away from the center of the disk
- The algorithm moves the arm methodically from the outer track to the inner track, servicing every request in its path; when the innermost track is reached, it reverses direction and moves toward the outer tracks, again servicing every request in its path

Device Handler Seek Strategies (5 of 7)
- LOOK (the elevator algorithm): the arm does not go all the way to either edge unless there are requests there; eliminates indefinite postponement
- (figure 7.12) The LOOK algorithm makes the arm move systematically from the first requested track at one edge of the disk to the last requested track at the other edge. In this example, all track requests are on the wait queue

Device Handler Seek Strategies (6 of 7)
- N-Step SCAN: holds all new requests until the arm starts on its way back; new requests are grouped together for the next sweep
- C-SCAN (Circular SCAN): the arm picks up requests only on its path during the inward sweep; provides a more uniform wait time
- C-LOOK: the inward sweep stops at the last high-numbered track request; the last track is not accessed unless required
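To make the differences concrete, the sketch below orders one request queue with FCFS, SSTF, and a LOOK-style sweep and compares total arm movement. The track numbers and the choice to sweep toward higher-numbered tracks first are assumptions for the example, not values from the textbook's figures.

```python
# Ordering the same request queue with FCFS, SSTF, and LOOK, then comparing
# the total arm movement each ordering produces.
def fcfs(requests, start):
    return list(requests)                               # serve in arrival order

def sstf(requests, start):
    pending, order, pos = list(requests), [], start
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))  # closest track first
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

def look(requests, start, direction=1):
    # sweep toward higher tracks first (direction=1), then reverse;
    # the arm only travels as far as the last request in each direction
    higher = sorted(t for t in requests if t >= start)
    lower = sorted((t for t in requests if t < start), reverse=True)
    return higher + lower if direction == 1 else lower + higher

def arm_movement(order, start):
    total, pos = 0, start
    for track in order:
        total += abs(track - pos)
        pos = track
    return total

track_requests, head = [98, 183, 37, 122, 14, 124, 65, 67], 53
for strategy in (fcfs, sstf, look):
    order = strategy(track_requests, head)
    print(strategy.__name__, order, arm_movement(order, head))
```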
Device Handler Seek Strategies (7 of 7)
- Which strategy is best?
  - FCFS works best with light loads; service time becomes unacceptably long under heavy loads
  - SSTF works best with moderate loads; it has a localization problem under heavy loads
  - SCAN works best with light to moderate loads; it eliminates indefinite postponement, with throughput and mean service times similar to SSTF
  - C-SCAN works best with moderate to heavy loads; it has very small variances in service time

Optical Disc Storage (1 of 2)
- Design features: a single spiraling track; same-sized sectors from the center to the rim of the disc; the disc spins at a constant linear velocity (CLV); more sectors, and therefore more data, than a comparable magnetic disk
- (figure 7.13) On an optical disc, the sectors (not all sectors are shown here) are of the same size throughout the disc. The disc drive changes speed to compensate, but it spins at a constant linear velocity (CLV)

Optical Disc Storage (2 of 2)
- Two important performance measures:
  - Sustained data-transfer rate: the speed at which massive amounts of data can be read from the disc; measured in megabytes per second (Mbps); crucial for applications requiring sequential access
  - Average access time (nonsequential access): the average time to move the head to a specific location on the disc; expressed in milliseconds (ms)
- A third feature, cache size (hardware), acts as a buffer for transferring blocks of data from the disc

CD and DVD Technology (1 of 4)
- Data is recorded as zeros and ones: pits (indentations) and lands (flat areas)
- A CD is read with a low-power laser: light striking a land reflects to a photodetector; light striking a pit is scattered and absorbed
- The photodetector converts the light intensity into a digital signal

CD and DVD Technology (2 of 4)
- CD-R (compact disc recordable) technology: requires an expensive disk controller; records data using a write-once technique, so data cannot be erased or modified
- The disc contains several layers, including a gold reflective layer and a dye layer
- Recording uses a high-power laser to make permanent marks on the dye layer; the CD cannot be erased after data is recorded
- Data is read on a standard CD drive with a low-power beam

CD and DVD Technology (3 of 4)
- CD-RW and DVD-RW: rewritable discs; data can be written, changed, and erased
- Uses phase change technology with amorphous and crystalline phase states
- To record data, the beam heats the disc and the state changes from crystalline to amorphous; to erase data, a low-energy beam heats the pits, loosening the alloy so that it returns to its original crystalline state
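As a rough illustration of how the two performance measures interact, the sketch below estimates the time to read a file as one average access plus the transfer term. The drive figures and file size are illustrative assumptions, not specifications from the text.

```python
# Back-of-the-envelope read time for an optical disc: one average access
# (head movement) plus the file size divided by the sustained transfer rate.
def read_time_ms(file_mb, avg_access_ms, sustained_mb_per_s):
    return avg_access_ms + (file_mb / sustained_mb_per_s) * 1000.0

# Illustrative drive: 150 ms average access, 10.5 MB/s sustained rate.
print(read_time_ms(file_mb=700, avg_access_ms=150, sustained_mb_per_s=10.5))
# Sequential reads are dominated by the transfer term; scattered small reads
# pay the access-time term repeatedly, which is why both measures matter.
```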
CD and DVD Technology (4 of 4)
- DVDs compared to CDs: similar in design, shape, and size
- They differ in data capacity: a dual-layer, single-sided DVD holds the equivalent of 13 CDs; a single-layer, single-sided DVD holds 8.6 GB (using MPEG video compression)
- They differ in laser wavelength: a DVD uses a red laser, which allows smaller pits and a more tightly wound spiral

Blu-ray Disc Technology
- Same physical size as a DVD or CD, but with smaller pits and more tightly wound tracks
- The use of a blue-violet laser allows multiple layers
- Formats: BD-ROM (read only), BD-R (recordable), and BD-RE (rewritable)

Solid State Storage
- Implements the Fowler-Nordheim tunneling phenomenon
- Stores electrons in a floating-gate transistor
- The electrons remain even after power is turned off

Flash Memory Storage
- A form of electrically erasable, programmable, read-only memory (EEPROM); nonvolatile and removable
- Emulates random access memory; the difference is that the data is stored securely even if the device is removed
- To write data, an electric charge is sent through the floating gate; to erase data, a strong electrical field (the "flash") is applied

Solid State Drives
- Fast but currently pricey storage devices; a typical device functions in a smaller physical space than magnetic drives
- Work electronically with no moving parts; require less power; silent; relatively lightweight
- Disadvantages: catastrophic crashes occur with no warning messages, and data transfer rates can degrade over time
- Hybrid drive: combines SSD and hard drive technology

Components of the I/O Subsystem (1 of 4)
- I/O channels: programmable units positioned between the CPU and the control units
- They synchronize device speeds, matching the fast CPU with slow I/O devices
- They manage the concurrent processing of CPU and I/O device requests, allowing CPU and I/O operations to overlap

Components of the I/O Subsystem (2 of 4)
- I/O channel program: specifies the action to be performed by the devices and controls the transmission of data between main memory and the control units
- I/O control unit: receives and interprets the signal
- Disk controller (disk drive interface): links the disk drive and the system bus
- I/O subsystem configuration: multiple paths increase flexibility and reliability
Components of the I/O Subsystem (3 of 4) (figure 7.18) Typical I/O subsystem configuration. If Control Unit 2 should become unavailable for any reason, the Device Manager cannot access Tape 1 or Tape 2. © Cengage Learning 2018 38 Mchoes/Flynn, Understanding Operating Systems, 8th Edition. © 2018 Cengage. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part. Components of the I/O Subsystem (4 of 4) (figure 7.19) I/O subsystem configuration with multiple paths, increasing the system’s flexibility and reliability. With two additional paths, shown with dashed lines, if Control Unit 2 malfunctions, then Tape 2 can still be accessed via Control Unit 3. © Cengage Learning 2018 Mchoes/Flynn, Understanding Operating Systems, 8th Edition. © 2018 Cengage. All Rights Reserved. 39 May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part. Problems to resolve Know which components are busy/free Solved by structuring Communicati interaction between units Accommodate requests during on Among heavy I/O traffic Devices (1 of Handled by buffering records and queuing requests 4) Accommodate speed disparity between CPU and I/O devices Handled by buffering records and queuing requests 40 Mchoes/Flynn, Understanding Operating Systems, 8th Edition. © 2018 Cengage. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part. I/O subsystem units finish independently of others CPU processes data while I/O performed Communicat ion Among Success requires device completion knowledge Devices (2 of Hardware flag tested by CPU 4) Channel status word (C S W) contains flag Three bits in flag represent I/O system component (channel, control unit, device) Changes zero to one (free to busy) Flag tested with polling and interrupts Interrupts Mchoes/Flynn, Understanding Operating are©more Systems, 8th Edition. efficient 2018 Cengage. way All Rights to Reserved. 41 May not be scanned, copied or duplicated, testor posted flag to a publicly accessible website, in whole or in part. Direct memory access (D M A) Allows control unit main memory access directly Transfers data without the intervention of CPU Communicat Used for high-speed devices (disk) ion Among Buffers Devices (3 of Temporary storage areas in main 4) memory, channels, control units Improves data movement synchronization Between relatively slow I/O devices and very fast CPU Double buffering: record processing by CPU while another is read or written by channel Mchoes/Flynn, Understanding Operating Systems, 8th Edition. © 2018 Cengage. All Rights Reserved. 42 May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part. Communication Among Devices (4 of 4) (figure 7.20) Example of double buffering: (a) the CPU is reading from Buffer 1 as Buffer 2 is being filled; (b) once Buffer 2 is filled, it can be read quickly by the CPU while Buffer 1 is being filled again. © Cengage Learning 2018 43 Mchoes/Flynn, Understanding Operating Systems, 8th Edition. © 2018 Cengage. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part. 
RAID (1 of 3)
- A set of physical disk drives viewed as a single logical unit; preferable to a few large-capacity disk drives
- Improved I/O performance, and improved data recovery in the event of a disk failure
- Introduces redundancy, which helps with hardware failure recovery but increases hardware costs
- Significant factors in selecting a RAID level: cost, speed, and the system's applications

RAID (2 of 3)
- (figure 7.21) Data being transferred in parallel from a Level 0 RAID configuration to a large-capacity disk. The software in the controller ensures that the strips are stored in the correct order

RAID (3 of 3)
- (table 7.7) The seven standard levels of RAID provide various degrees of error correction. Cost, speed, and the system's applications are significant factors to consider when choosing a level

Level | Error Correction Method | I/O Request Rate | Data Transfer Rate
0 | None | Excellent | Excellent
1 | Mirroring | Read: Good; Write: Fair | Read: Fair; Write: Fair
2 | Hamming code | Poor | Excellent
3 | Word parity | Poor | Excellent
4 | Strip parity | Read: Excellent; Write: Fair | Read: Fair; Write: Poor
5 | Distributed strip parity | Read: Excellent; Write: Fair | Read: Fair; Write: Poor
6 | Distributed strip parity and independent data check | Read: Excellent; Write: Poor | Read: Fair; Write: Poor

RAID Level Zero
- Uses data striping; not considered true RAID because it offers no parity, no error correction, and no redundancy or recovery
- Benefits: the devices appear as one logical unit; best for large quantities of non-critical data
- (figure 7.22) RAID Level 0 with four disks in the array. Strips 1, 2, 3, and 4 make up a stripe; strips 5, 6, 7, and 8 make up another stripe, and so on

RAID Level One
- Uses data striping (considered true RAID) with a mirrored configuration that serves as a backup
- Maintains a duplicate set of all data, which is expensive, but provides redundancy and improved reliability
- (figure 7.23) RAID Level 1 with three disks in the main array and three corresponding disks in the backup array, the mirrored array

RAID Level Two
- Uses small strips (considered true RAID) with a Hamming code for error detection and correction
- Expensive and complex; the size of the strip determines the number of disks in the array
- (figure 7.24) RAID Level 2. Seven disks are needed in the array to store a 4-bit data item, one for each bit and three for the parity bits. Each disk stores either a bit or a parity bit, based on the Hamming code used for redundancy
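Level 0 striping is simple enough to show in a few lines: consecutive strips go to consecutive disks, so one stripe spans the whole array and can be transferred in parallel. The sketch below is illustrative only; the strip size and disk count are arbitrary choices, not values from the text.

```python
# RAID Level 0 striping sketch: strip n lands on disk n mod N, so a stripe
# of N strips is spread across all N disks.
def stripe(data, num_disks, strip_size):
    disks = [bytearray() for _ in range(num_disks)]
    strips = [data[i:i + strip_size] for i in range(0, len(data), strip_size)]
    for n, strip in enumerate(strips):
        disks[n % num_disks].extend(strip)   # distribute strips round-robin
    return disks

def unstripe(disks, strip_size, total_len):
    out = bytearray()
    offset = 0
    while len(out) < total_len:
        for disk in disks:                   # read one stripe across all disks
            out.extend(disk[offset:offset + strip_size])
        offset += strip_size
    return bytes(out[:total_len])

data = bytes(range(32))
disks = stripe(data, num_disks=4, strip_size=4)
assert unstripe(disks, strip_size=4, total_len=len(data)) == data
```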
RAID Level Three
- A modification of Level 2 that requires only one disk for redundancy: one parity bit is computed for each strip
- (figure 7.25) RAID Level 3. A 4-bit data item is stored in the first four disks of the array; the fifth disk is used to store the parity for the stored data item

RAID Level Four
- Uses the same strip scheme as Levels 0 and 1; computes a parity for each strip and stores it in the corresponding strip on a designated parity disk
- (figure 7.26) RAID Level 4. The array contains four disks: the first three are used to store data strips, and the fourth is used to store the parity of those strips

RAID Level Five
- A modification of Level 4 that distributes the parity strips across the disks, avoiding the Level 4 parity-disk bottleneck
- Disadvantage: it is complicated to regenerate data from a failed device
- (figure 7.27) RAID Level 5 with four disks. Notice how the parity strips are distributed among the disks

RAID Level Six
- Provides an extra degree of error protection and correction by using two different parity calculations (double parity): one is the same as in Levels 4 and 5, and the other uses an independent data-check algorithm
- The parities are stored on separate disks across the array, in strips corresponding to the data strips
- Advantage: data can be restored even if two disks fail
- (figure 7.28) RAID Level 6. Notice how parity strips and data check (DC) strips are distributed across the disks

Nested RAID Levels (1 of 2)
- Combines multiple RAID levels; complex
- (figure) A simple nested RAID Level 10 system, which is a Level 0 system consisting of two Level 1 systems

Nested RAID Levels (2 of 2)
- (table 7.8) Some common nested RAID configurations, always indicated with two numbers that signify the combination of levels. For example, neither Level 01 nor Level 10 is the same as Level 1

Nested Level | Combination
01 (or 0+1) | A Level 1 system consisting of multiple Level 0 systems
10 (or 1+0) | A Level 0 system consisting of multiple Level 1 systems
03 (or 0+3) | A Level 3 system consisting of multiple Level 0 systems
30 (or 3+0) | A Level 0 system consisting of multiple Level 3 systems
50 (or 5+0) | A Level 0 system consisting of multiple Level 5 systems
60 (or 6+0) | A Level 0 system consisting of multiple Level 6 systems
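The strip-parity idea behind Levels 4 and 5 reduces to a bytewise XOR: the parity strip is the XOR of the data strips in a stripe, and the XOR of all surviving strips rebuilds a lost one. The sketch below is a toy illustration with made-up strip contents, not the book's algorithm for any particular level.

```python
# Strip-parity sketch in the spirit of RAID Levels 4/5: parity = XOR of the
# data strips, so any single lost strip can be rebuilt from the others.
from functools import reduce

def parity_strip(strips):
    # bytewise XOR of equal-length strips
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), strips)

def rebuild(surviving_strips):
    # XOR of every surviving strip (data + parity) recovers the missing one
    return parity_strip(surviving_strips)

strips = [b"AAAA", b"BBBB", b"CCCC"]        # data strips on three disks
parity = parity_strip(strips)               # stored on the parity disk

lost = strips[1]                            # pretend the second disk fails
recovered = rebuild([strips[0], strips[2], parity])
assert recovered == lost
```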
Conclusion (1 of 3)
- The Device Manager manages every system device as effectively as possible
- Devices vary in speed and in their degree of sharability, and they use either direct access or sequential access
- Magnetic media use one or many read/write heads: the heads may be in a fixed position (for optimum speed) or may move across the surface (for optimum storage space)
- Optical media adjust the disc speed so that data is recorded and retrieved correctly

Conclusion (2 of 3)
- Flash memory: the Device Manager tracks USB devices and assures that data is sent and received correctly
- The success of the I/O subsystem depends on the communications that link the channels, control units, and devices
- Each seek strategy has advantages and disadvantages, summarized in Table 7.9

Conclusion (3 of 3)
- (table 7.9) Comparison of the hard disk drive seek strategies discussed in this chapter

Strategy | Advantages | Disadvantages
FCFS | Easy to implement; sufficient for light loads | Doesn't provide the best average service; doesn't maximize throughput
SSTF | Throughput better than FCFS; tends to minimize arm movement; tends to minimize response time | May cause starvation of some requests; localizes under heavy loads
SCAN/LOOK | Eliminates starvation; throughput similar to SSTF; works well with light to moderate loads | Needs a directional bit; more complex algorithm to implement; increased overhead
N-Step SCAN | Easier to implement than SCAN | The most recent requests wait longer than with SCAN
C-SCAN/C-LOOK | Works well with moderate to heavy loads; no directional bit; small variance in service time; C-LOOK doesn't travel to unused tracks | May not be fair to recent requests for high-numbered tracks; more complex algorithm than N-Step SCAN, causing more overhead
