Full Transcript

Storage Overview

Non-volatile memory can be viewed as part of the memory hierarchy, or even as part of the I/O system, because it is invariably connected to the I/O buses and not to the main memory bus.

How to store?
- magnetic disk
- flash memory

1st semester, 2024 – Loubach – CSC-25 High Performance Architectures – ITA

Storage (cont.) – Magnetic Disks

Purpose:
- non-volatile storage
- big, cheap, and slow (when compared to flash memory)
- lowest level in the memory hierarchy

A magnetic disk is based on a rotating platter covered with a magnetic surface, and uses one read/write head per surface to access the data. (Illustration from http://www.btdersleri.com/ders/Harddiskler) In fact, disks may have more than one platter. Disks were also used in the "remote past" as devices for physical data transport, e.g., floppy disks.

Cylinder-head-sector (CHS) addressing. (Illustration from https://www.partitionwizard.com/help/what-is-chs.html)

Example of track and sector counts:
- 5k to 30k tracks per surface, i.e., top and bottom
- 100 to 500 sectors per track
- the sector is the smallest unit that can be addressed

Historically, all tracks had the same number of sectors, so sectors had different physical sizes. Current disks instead use tracks with different numbers of sectors, to obtain bigger storage capacity:
- with platters of the same recording density, the inner tracks hold fewer sectors, which gives an increased total number of sectors and, finally, bigger disk capacity
- logical block addressing (LBA) is used instead of CHS

Cylinder: all the concentric tracks under the r/w heads at a given arm position on all surfaces, i.e., a cylindrical intersection.

Read/write process steps:
1. seek time – position the arm over the proper track
2. rotational latency – wait for the desired sector to rotate under the r/w head
3. transfer time – transfer a block of bits, i.e., a sector, under the r/w head

Storage (cont.) – Magnetic Disks – Performance

Seek time (the average seek time as reported by the industry):
- between 5 and 12 ms

    AST = (sum of the times for all possible seeks) / (total number of possible seeks)    (1)

Due to locality with respect to disk references, the actual average seek time can be only 25% to 33% of the value disclosed by manufacturers.

Rotational latency:
- 3,600 to 15,000 RPM, i.e., 16 ms to 4 ms per revolution
- average rotational latency (ARL) of 8 ms to 2 ms, i.e., on average the desired information is halfway around the disk
- common values are 5,400 and 7,200 RPM

    ARL = 0.5 × RotationPeriod    (2)

    RotationPeriod = 60 / x [RPM]  seconds    (3)

Transfer time depends on:
- transfer size per sector, e.g., 1 KiB, 4 KiB
- rotation speed, e.g., 3,600 to 15,000 RPM
- recording density, in bits/inch
- disk diameter, from 1.0 to 3.5 inches
- typical transfer rates: 3 to 65 MiB/s

Storage (cont.) – Magnetic Disks – Evolution

Increase in the number of bits per square inch, i.e., density. A steep price reduction, from US$ 100,000/GB (1984) to less than US$ 0.5/GB (2012). Considerable increase in RPM, from 3,600 RPM (1980s) to close to 15,000 RPM (2000s):
- rotation speed did not continue to increase due to problems with high rotation speeds
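As a sanity check, the rotational-latency formulas above, Eqs. (2) and (3), can be evaluated in code. The 4 KiB sector size and 64 MiB/s rate below are sample values picked from the ranges quoted on the slides, not figures from any specific drive.

```python
# Numerical check of Eqs. (2)-(3): average rotational latency from RPM,
# plus a sample sector transfer time. The 4 KiB sector and 64 MiB/s rate
# are illustrative values taken from the ranges quoted above.
def rotation_period(rpm: float) -> float:
    """Seconds per revolution (Eq. 3)."""
    return 60.0 / rpm

def avg_rotational_latency(rpm: float) -> float:
    """Average rotational latency in seconds (Eq. 2): half a revolution."""
    return 0.5 * rotation_period(rpm)

def transfer_time(size_bytes: int, rate_bytes_per_s: float) -> float:
    """Seconds to transfer one sector at a sustained rate."""
    return size_bytes / rate_bytes_per_s

for rpm in (3_600, 5_400, 7_200, 15_000):
    print(f"{rpm:>6} RPM: ARL = {avg_rotational_latency(rpm) * 1e3:.2f} ms")
# 3,600 RPM gives ~8.33 ms and 15,000 RPM gives 2.00 ms, matching the
# 8 ms to 2 ms range above.

print(f"4 KiB sector at 64 MiB/s: {transfer_time(4 * 1024, 64 * 2**20) * 1e6:.1f} us")
```

The same helpers can be reused to plug concrete numbers into the disk access time expression that follows.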
Storage (cont.) – Magnetic Disks – Evolution

Disk access time (DAT):

    DAT = SeekTime + RotationalLatency + TransferTime + ControllerTime + QueuingDelay    (4)

Storage (cont.) – RAID Systems

Disks differ from other levels of the memory hierarchy because they are non-volatile. They are also the lowest level, i.e., there is no other level in the computer to fetch from if the data is not on the disk. Therefore, disks should not fail; but all hardware fails.

Redundant array of independent disks (RAID), formerly introduced as "inexpensive":
- multiple simultaneous accesses
- data are spread over multiple disks
- striping: sequential data is logically allocated on separate disks to increase performance
- mirroring: data is copied to identical disks, i.e., mirrored, to increase availability

Main characteristics:
- latency is not necessarily reduced
- availability is enhanced through the addition of redundant disks
- lost information can be rebuilt from redundant information

Reliability vs availability:
- reliability is lower, i.e., more disks mean a greater probability of failure
- availability is greater, i.e., failures do not necessarily lead to unavailability

RAID standard levels summary:

Level   Description
RAID 0  Not redundant, but more efficient. Does not recover from failures. Striped/interleaved volumes
RAID 1  Redundant and able to recover from one failure. Uses twice as many disks as RAID 0. Mirror/copy volumes
RAID 2  Applies memory-style error-correcting codes (ECC) to disks. No commercial use
RAID 3  Bit-interleaved parity. One parity/check disk for multiple data disks; able to recover from one failure
RAID 4  Block-interleaved parity. One check disk for multiple data disks; able to recover from one failure
RAID 5  Distributed block-interleaved parity. Able to recover from one failure
RAID 6  RAID 5 extension with another parity block. Able to recover from double faults

(Illustrations from https://en.wikipedia.org/wiki/Standard_RAID_levels)

Let's consider a case with two data drives in a 3-drive RAID 5 array. The Boolean XOR function is used to compute the parity of D1 and D2:

    Data on drive 1:  D1 = 1001 1001
    Data on drive 2:  D2 = 1000 1100
    Parity:           P  = D1 XOR D2 = 0001 0101, written on drive 3

Should any of the 3 drives fail, its contents can be restored using the same XOR function. If drive 1 fails, D1 can be restored by:

    D2 = 1000 1100
    P  = 0001 0101
    D1 = D2 XOR P = 1001 1001

RAID 6 details – row-diagonal parity:
- each diagonal does not cover (i.e., leaves out) one disk
- even if two disks fail, it is possible to recover a block
- with one block recovered, the second one can be recovered through the row
- just p − 1 diagonals are needed to protect the p disks

Example: RAID 6 with p = 5; p + 1 disks in total; p − 1 disks hold data. The row parity disk works just like in RAID 4. Each block of the diagonal parity disk contains the parity of the blocks in the same diagonal.
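The RAID 5 parity computation above can be sketched in a few lines. The two 8-bit values are the ones from the slide; the snippet only illustrates the XOR relation, not a real RAID implementation.

```python
# Sketch of the RAID 5 parity example above, using the slide's two 8-bit blocks.
d1 = 0b1001_1001          # data on drive 1
d2 = 0b1000_1100          # data on drive 2

parity = d1 ^ d2          # written on drive 3
print(f"P  = {parity:08b}")          # -> 00010101

# Drive 1 fails: XOR the surviving data drive with the parity to rebuild it.
recovered_d1 = d2 ^ parity
print(f"D1 = {recovered_d1:08b}")    # -> 10011001
assert recovered_d1 == d1
```

Because XOR is associative and self-inverse, the same rebuild works for any single failed drive, including the parity drive itself.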
Storage (cont.) – Flash Memory

The technology is similar to traditional EEPROM (electrically erasable programmable read-only memory), but with higher memory capacity per chip and low power consumption. Read access time is slower than DRAM, but much faster than disks:
- a 256-byte transfer from flash would take around 6.5 µs, and about 1000× longer on disks (2010)
- with respect to writing, DRAM can be 10 to 100× faster

Stores require "deletion" of data:
- first a memory block is erased, and then the new data is written
- i.e., erase-before-write

NOR- and NAND-based flash memories. The first flash memories, NOR-based, were a direct competitor of traditional EEPROM:
- randomly addressable
- typically used for the BIOS

After a while, NAND flash memories emerged:
- offering higher storage density
- but they can only be read in blocks, since block access eliminates the wiring required for random access
- much cheaper per gigabyte and much more common than NOR flash

Prices: in 2010, $2/GB for flash, $40/GB for SDRAM, and $0.09/GB for disks; in 2016, $0.3/GB for flash, $7/GB for SDRAM, and $0.06/GB for disks.

There is wear-out of flash with respect to writes, limited to between 100K and 1M write cycles. The life cycle can be extended through the uniform distribution of writes across blocks (wear leveling). Floppy disks became extinct, and so did hard drives in mobile systems, thanks to solid-state disks (SSD).
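A minimal sketch of erase-before-write plus naive wear leveling, assuming a toy device with a handful of blocks. The 100K-cycle limit comes from the range above; the "least-worn erased block" policy and all class and function names are illustrative, far simpler than a real flash translation layer.

```python
# Toy flash block manager illustrating erase-before-write and naive wear
# leveling (always program the least-worn erased block). Sizes, names, and
# the 100K-cycle limit are illustrative, not a real FTL.
MAX_ERASE_CYCLES = 100_000

class FlashBlock:
    def __init__(self):
        self.data = None          # None means "erased"
        self.erase_count = 0

    def erase(self):
        if self.erase_count >= MAX_ERASE_CYCLES:
            raise RuntimeError("block worn out")
        self.data = None
        self.erase_count += 1

    def program(self, data):
        if self.data is not None:  # erase-before-write rule
            raise RuntimeError("must erase before writing")
        self.data = data

class FlashDevice:
    def __init__(self, n_blocks):
        self.blocks = [FlashBlock() for _ in range(n_blocks)]

    def write(self, data):
        """Write to the least-worn erased block, erasing one first if needed."""
        erased = [b for b in self.blocks if b.data is None]
        if not erased:
            victim = min(self.blocks, key=lambda b: b.erase_count)
            victim.erase()
            erased = [victim]
        block = min(erased, key=lambda b: b.erase_count)
        block.program(data)
        return self.blocks.index(block)

dev = FlashDevice(4)
slots = [dev.write(b"page") for _ in range(8)]
print(slots)  # writes are spread across all blocks rather than hammering one
```

Spreading the erase cycles this way is what stretches the 100K–1M cycle budget into a usable device lifetime.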
