Chapter 9: Storage and I/O Systems

Storage Overview

Non-volatile memory can be viewed as part of the memory hierarchy, or even as part of the I/O system, because it is invariably connected to the I/O buses and not to the main memory bus, as illustrated in Fig. 4.4. In this case, the main choices for storing data are magnetic disks and flash memory.

Magnetic Disks

The purpose of magnetic disks is to serve as non-volatile (persistent) storage. Disks are usually big, cheap, and slow (when compared to flash memory). They reside at the lowest level of the memory hierarchy.

This device is based on a rotating disk covered with a magnetic surface, with one read-write head per surface to access information. Disks may have more than one platter, as shown in Figs. 9.1 and 9.2.

Figure 9.1: A sector, cluster, track, and cylinder illustration. Fig. source: http://www.btdersleri.com/ders/Harddiskler

In the past, disks were also used as devices for physical data transportation, e.g., floppy disks (https://en.wikipedia.org/wiki/Floppy_disk).

Figure 9.2: Cylinder-head-sector (CHS) addressing. The arm assembly moves to select a given cylinder. Then, a read-write head is selected (one per platter surface). As the spindle rotates, the sector's information is finally accessed by the selected read-write head. Fig. source: https://www.partitionwizard.com/help/what-is-chs.html

Some example track and sector counts: there are about 5k to 30k tracks per disk surface (top and bottom), and 100 to 500 sectors per track. The sector is the smallest unit that can be addressed on a disk, as shown in Fig. 9.3.

Originally, all tracks had the same number of sectors, so sectors had different physical sizes: the inner sectors (smaller areas) could hold the same capacity as the outer sectors (bigger areas) through a different density arrangement. Current disks instead keep the same density on all platters and give tracks different numbers of sectors, yielding a bigger total storage capacity. They also use logical block addressing (LBA) instead of CHS.

Figure 9.3: Different numbers of sectors in the inner and outer tracks give an increased total number of sectors, and thus a bigger disk capacity, considering platters with the same density.

A cylinder comprises all the concentric tracks under the read-write heads at a given arm position on all surfaces, i.e., the cylindrical intersection.

The read/write process includes the following steps:

1. seek time – the time to position the arm (as in Fig. 9.2) over the addressed track;
2. rotational latency – the time spent waiting for the desired sector to rotate under the read-write head; and
3. transfer time – the time taken to transfer a block of bits, i.e., a sector, under the read-write head.

Magnetic Disks Performance

Seek Time

The seek time, reported by the industry as the average seek time (AST), is generally between 5 and 12 ms. It can be computed as in Eq. (9.1).

$$\text{AST} = \frac{\text{Sum of the time for all possible seeks}}{\text{Total number of possible seeks}} \qquad (9.1)$$

Due to locality in disk references, the actual seek time can be only 25% to 33% of the time disclosed by manufacturers.

Rotational Latency

Disks spin at about 3,600 to 15,000 RPM, i.e., 16 ms down to 4 ms per revolution. This measurement is usually expressed as the average rotational latency (ARL), e.g., from 8 ms down to 2 ms, since on average the desired information is halfway around the disk. It can be computed as in Eqs. (9.2) and (9.3).

$$\text{ARL} = 0.5 \times \text{RotationPeriod} \qquad (9.2)$$

$$\text{RotationPeriod} = \frac{60}{x\ \text{[RPM]}}\ \text{[seconds]} \qquad (9.3)$$

Common values for the disk rotational speed x are 5,400 and 7,200 RPM.
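Eqs. (9.2) and (9.3) are easy to check in a few lines of Python; the RPM values below are the ones quoted in this section:

```python
def rotation_period_s(rpm):
    """Eq. (9.3): time for one full revolution, in seconds."""
    return 60 / rpm

def avg_rotational_latency_ms(rpm):
    """Eq. (9.2): on average, the desired sector is half a revolution away."""
    return 0.5 * rotation_period_s(rpm) * 1000

for rpm in (3_600, 5_400, 7_200, 15_000):
    print(f"{rpm:>6} RPM -> ARL = {avg_rotational_latency_ms(rpm):.2f} ms")
# 3,600 RPM gives 8.33 ms and 15,000 RPM gives 2.00 ms,
# matching the 8 ms to 2 ms range quoted above.
```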
Transfer Time

The transfer time depends on the transfer size per sector (e.g., 1 KiB, 4 KiB), the rotational speed (e.g., 3,600 to 15,000 RPM), and the recording density (bits/inch), taking into account disk diameters from 1.0 to 3.5 inches, for example. Typical transfer rates are from 3 to 65 MiB/s.

Magnetic Disks Evolution

Magnetic disks evolved over the years. There was an increase in the number of bits per square inch (disk density), a steep price reduction from US$ 100,000/GiB (1984) to less than US$ 0.5/GiB (2012), and a considerable increase in rotational speed, from 3,600 RPM (in the '80s) to close to 15,000 RPM (2000s). The latter stopped increasing due to problems with high rotational speeds.

Eq. (9.4) is used to compute the disk access time (DAT).

$$\text{DAT} = \text{SeekTime} + \text{RotationalLatency} + \text{TransferTime} + \text{ControllerTime} + \text{QueuingDelay} \qquad (9.4)$$

RAID Systems

Disks differ from the other levels in the memory hierarchy because they are non-volatile. They are also the lowest level in the hierarchy: there is no other level in the computer to fetch from if the data is not on the disk (or on another device at the lowest memory hierarchy level, to be more accurate).

Therefore, disks should not fail; but all hardware fails at some point in time. Hence the redundant array of independent disks (RAID), originally introduced as "inexpensive" rather than "independent". RAID allows multiple simultaneous accesses, since the data are spread over multiple disks.

Two basic techniques are used here. The first is striping, where sequential data is logically allocated on separate disks to increase performance (see the sketch after Table 9.1). The second is mirroring, where data is copied to identical disks, i.e., mirrored, to increase information availability.

Considering the main characteristics of a RAID system: latency is not necessarily reduced, availability is enhanced through the addition of redundant disks, and lost information can be rebuilt from redundant data.

Reliability vs. Availability

In RAID, reliability becomes a problem: the system as a whole is less reliable, since more disks bring a greater failure probability. Conversely, availability is increased: failures do not necessarily lead to unavailability.

Standard Levels Summary

Table 9.1 presents a summary of the RAID levels. Fig. 9.4 illustrates the RAID 3, RAID 4, and RAID 5 systems.

Table 9.1: RAID standard levels summary.

RAID 0 – No redundancy, but the most efficient level. It does not recover from failures. RAID 0 has striped/interleaved volumes.
RAID 1 – Redundant; able to recover from one failure. It uses twice as many disks as RAID 0. RAID 1 has mirrored/copied volumes.
RAID 2 – Applies memory-style error-correcting codes (ECC) to disks. No extensive commercial use.
RAID 3 – Bit-interleaved parity: one parity/check disk for multiple data disks; able to recover from one failure. Illustrated in Fig. 9.4.
RAID 4 – Block-interleaved parity: one check disk for multiple data disks; able to recover from one failure. Illustrated in Fig. 9.4.
RAID 5 – Distributed block-interleaved parity; able to recover from one failure. Illustrated in Fig. 9.4.
RAID 6 – A RAID 5 extension with a second parity block; able to recover from double failures. Illustrated in Fig. 9.5.
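As promised above, a minimal sketch of how striping maps a logical block onto a disk array, assuming round-robin placement with a stripe unit of one block (the function name and parameters are illustrative, not from the notes):

```python
def locate_block(logical_block, n_disks):
    """Round-robin striping (RAID 0): return (disk, offset) for a logical block."""
    disk = logical_block % n_disks      # consecutive blocks rotate across the disks
    offset = logical_block // n_disks   # position of the block within that disk
    return disk, offset

# With 4 disks, blocks 0..7 land on disks 0,1,2,3,0,1,2,3, so a
# sequential read of 4 blocks can be served by all disks in parallel.
for block in range(8):
    print(block, locate_block(block, n_disks=4))
```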
Figure 9.4: RAID examples. In RAID 3 (bit-interleaved parity), one disk (Disk 3, in this example) is specifically designated as the check disk for multiple data disks (the other 3 disks illustrated here). In RAID 4 (block-interleaved parity), there is also one disk dedicated to the parity information (Disk 3). Finally, in RAID 5 there is no dedicated parity disk, as this is distributed block-interleaved parity: the parity information is distributed among the disks in the system. Fig. source: https://en.wikipedia.org/wiki/Standard_RAID_levels

Example

A simple example. Consider two drives holding data in a 3-drive RAID 5 array:

Data on D1 = 1001 1001 (drive 1)
Data on D2 = 1000 1100 (drive 2)

The Boolean XOR function is used to compute the parity of D1 and D2: P = 0001 0101, which is written to drive 3. Should any of the 3 drives fail, its contents can be restored using the same XOR function. If drive 1 fails, D1's contents can be restored by the following procedure:

D2 = 1000 1100
P  = 0001 0101
XOR ----------
D1 = 1001 1001
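The parity arithmetic is easy to verify in Python; the bit patterns are the ones from the example above:

```python
D1 = 0b1001_1001  # data on drive 1
D2 = 0b1000_1100  # data on drive 2

P = D1 ^ D2       # parity block, written to drive 3
print(f"P  = {P:08b}")            # 00010101, as computed above

# If drive 1 fails, XORing the two survivors restores its contents:
restored = D2 ^ P
print(f"D1 = {restored:08b}")     # 10011001
assert restored == D1
```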
RAID 6 Details

RAID 6 uses row-diagonal parity. Each diagonal leaves one disk out, i.e., it does not cover that disk. Even if two disks fail, it is possible to recover a block; once one block has been recovered, the next one can be recovered through the row parity, as in RAID 4. This scheme needs just p − 1 diagonals to protect p disks. Fig. 9.5 illustrates the concept.

Figure 9.5: RAID 6 (p = 5); p + 1 disks in total; p − 1 disks hold data. The row parity disk works just like in RAID 4. Each block of the diagonal parity disk contains the parity of the blocks in the same diagonal.

How RAID 6 works, an example. Consider Fig. 9.5 and assume that data disks 1 and 3 fail. Standard RAID recovery using the row parity does not help at first, because each affected row misses two data blocks, one from disk 1 and one from disk 3. The way around this is to start from diagonal 0, which does not involve disk 1 and therefore misses only the block from disk 3; that block is recovered using the diagonal parity. Next, diagonal 2 is used, since it does not involve disk 3 but does involve the other failed disk: a block from disk 1 is recovered using the diagonal parity. Once these blocks are recovered, the standard RAID 4 recovery (row parity) can be used to recover two more blocks, which in turn allows two more diagonals to be recovered. This loop continues until all blocks are finally recovered.

Flash Memory

Flash memory technology is similar to traditional EEPROM (electrically erasable programmable read-only memory), but with a higher memory capacity per chip and low power consumption. The read access time is slower than DRAM but much faster than disks: a 256-byte transfer from flash would take around 6.5 µs, and about 1,000 times longer on disk, based on 2010 numbers. Regarding the writing process, DRAM can be 10 to 100 times faster. Stores in flash require the deletion of data: first a memory block is erased, and then the new data is written.

NOR- and NAND-based Flash Memories

The first flash memories, NOR, were a direct competitor of traditional EEPROM. They were randomly addressable and typically used for the computer's basic input/output system (BIOS). Later, NAND flash memories emerged. They offered a higher storage density, but can only be read in blocks, as the design eliminates the wiring required for random accesses. NAND is much cheaper per gigabyte and much more common than NOR flash.

In 2010, the price was $2/GiB for flash, $40/GiB for SDRAM, and $0.09/GiB for disks. In 2016, it was $0.3/GiB for flash, $7/GiB for SDRAM, and $0.06/GiB for disks.

Flash wears out with writes: it is limited to about 100 thousand to one million write/erase cycles, depending on the manufacturer. The memory's life can be extended by distributing writes uniformly across the blocks (a sketch follows below). Floppy disks are now extinct, and so are hard drives in mobile systems, thanks to solid-state disks (SSDs).
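A minimal sketch of that wear-leveling idea, assuming a toy controller that simply directs every write to the least-erased block; real flash translation layers are considerably more elaborate:

```python
import heapq

class WearLeveler:
    """Toy flash translation layer: always write to the least-worn block."""

    def __init__(self, n_blocks):
        # Min-heap of (erase_count, block_id) pairs.
        self.heap = [(0, block) for block in range(n_blocks)]
        heapq.heapify(self.heap)

    def write(self):
        erases, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (erases + 1, block))  # erase before write
        return block

ftl = WearLeveler(n_blocks=4)
print([ftl.write() for _ in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]: wear spreads evenly
```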
Clusters (I/O Servers) Evaluation

Overview

Next, the performance, cost, and dependability of a system designed to provide high I/O performance are evaluated. The reference is a VME T-80 rack, used in the Internet Archive, a project started in 1996 that aims to maintain a historical record of the Internet over time. The typical cluster building blocks are servers (e.g., storage nodes), Ethernet switches, and racks.

Data and Assumptions

For this evaluation, let's consider the VME T-80 rack from Capricornian Systems (data from 2006) and the PetaBox GB2000 storage node. The GB2000 has 4× 500 GiB parallel advanced technology attachment (PATA) disk drives; 512 MiB of DDR266 DRAM; one 10/100/1000 Ethernet interface; and a 1 GHz VIA C3 processor (80x86 instruction set). It dissipates around 80 W in typical configurations. Forty GB2000s fit in a standard VME rack, giving a total of 80 TB of raw capacity. The nodes are connected together with a 48-port 10/100/1000 switch, which dissipates around 3 kW; the limit is usually 10 kW per rack.

The cost and other information considered in the evaluation are as follows:

- $500 for the processor (performance of 1,000 MIPS, millions of instructions per second), DRAM, ATA disk controller, power supply, fans, and enclosure;
- $375 × 4 for the 7,200 RPM PATA drives holding 500 GiB each, with an average seek time of 8.5 ms, transfers at 50 MiB/s from the disk, and a PATA link speed of 133 MiB/s;
- $3,000 for the 48-port 10/100/1000 Ethernet switch and all cables for a rack;
- the ATA controller adds 0.1 ms of overhead to perform a disk I/O;
- the operating system uses 50,000 CPU instructions for a disk I/O;
- the network protocol stacks use 100,000 CPU instructions to transmit a data block between the cluster and the external world; and
- the average I/O size is 16 KiB for accesses to the historical record, and 50 KiB when collecting a new snapshot.

Performance Evaluation

Let's evaluate the cost per I/O per second (IOPS) of the 80 TB rack, assuming that every disk I/O requires an average seek and average rotational delay, that the workload is evenly divided among all disks, that all devices can be used at 100% of their capacity, and that the system is limited only by the weakest link, which it can operate at 100% utilization.

We compute the performance for both average I/O sizes, i.e., 16 and 50 KiB, as stated above. Remember that I/O performance is limited by the weakest link in the chain. The evaluation covers the maximum performance of each link in the I/O chain:

1. the CPU, main memory, and I/O bus of one GB2000;
2. the ATA controllers and disks; and
3. the network switch.

IOPS: CPU, main memory, and I/O bus

The maximum CPU IOPS is given by:

$$\text{CPU IOPS}_{\max} = \frac{1{,}000\ \text{MIPS}}{50{,}000\ \text{instructions per I/O} + 100{,}000\ \text{instructions per message}} \approx 6{,}667 \qquad (9.5, 9.6)$$

The CPU I/O performance is determined by the CPU speed, together with the number of instructions needed to perform a disk I/O and to send it over the network.

The maximum main memory IOPS is given by:

$$\text{MainMemory IOPS}_{\max} = \frac{266 \times 8\ \text{MiB/s}}{16\ \text{KiB per I/O}} \approx 133{,}000 \qquad (9.7)$$

$$\text{MainMemory IOPS}_{\max} = \frac{266 \times 8\ \text{MiB/s}}{50\ \text{KiB per I/O}} \approx 42{,}500 \qquad (9.8)$$

The maximum performance of the memory system is determined by the memory bandwidth and the size of the I/O transfers.

The maximum I/O bus IOPS is given by:

$$\text{IOBus IOPS}_{\max} = \frac{133\ \text{MiB/s}}{16\ \text{KiB per I/O}} \approx 8{,}300 \qquad (9.9)$$

$$\text{IOBus IOPS}_{\max} = \frac{133\ \text{MiB/s}}{50\ \text{KiB per I/O}} \approx 2{,}700 \qquad (9.10)$$

The PATA link performance is limited by its bandwidth and the size of the I/O transfers. Since each storage node has two buses, the I/O bus limits the maximum performance to at most 16,600 IOPS for 16 KiB blocks and at most 5,400 IOPS for 50 KiB blocks.

ATA controllers and disks

Next in the I/O chain are the ATA controllers:

$$\text{PATATransferTime} = \frac{16\ \text{KiB}}{133\ \text{MiB/s}} \approx 0.1\ \text{ms} \qquad (9.11)$$

$$\text{PATATransferTime} = \frac{50\ \text{KiB}}{133\ \text{MiB/s}} \approx 0.4\ \text{ms} \qquad (9.12)$$

The maximum ATA IOPS is given by:

$$\text{ATA IOPS}_{\max} = \frac{1}{0.1\ \text{ms} + 0.1\ \text{ms controller overhead}} = 5{,}000 \qquad (9.13)$$

$$\text{ATA IOPS}_{\max} = \frac{1}{0.4\ \text{ms} + 0.1\ \text{ms controller overhead}} = 2{,}000 \qquad (9.14)$$

For the disks, the I/O time is given by:

$$\text{IOTime} = 8.5\ \text{ms} + \frac{0.5 \times 60}{7{,}200\ \text{RPM}} + \frac{16\ \text{KiB}}{50\ \text{MiB/s}} \approx 13.0\ \text{ms} \qquad (9.15)$$

$$\text{IOTime} = 8.5\ \text{ms} + \frac{0.5 \times 60}{7{,}200\ \text{RPM}} + \frac{50\ \text{KiB}}{50\ \text{MiB/s}} \approx 13.7\ \text{ms} \qquad (9.16)$$

And the maximum disk IOPS is computed as follows:

$$\text{Disk IOPS}_{\max} = \frac{1}{13.0\ \text{ms}} \approx 77 \qquad (9.17)$$

$$\text{Disk IOPS}_{\max} = \frac{1}{13.7\ \text{ms}} \approx 73 \qquad (9.18)$$

That is, $292 \le \text{Disk IOPS}_{\max} \le 308$ considering the four disks.

Network switch

The final link in the chain is the network connecting the computers to the outside world: the Ethernet switch. As for the other devices in the chain, the maximum IOPS is computed for both I/O sizes:

$$\text{Ethernet IOPS}_{\max\ \text{per}\ 1000\ \text{Mbit}} = \frac{1{,}000\ \text{Mbit/s}}{16\ \text{KiB} \times 8} \approx 7{,}812 \qquad (9.19)$$

$$\text{Ethernet IOPS}_{\max\ \text{per}\ 1000\ \text{Mbit}} = \frac{1{,}000\ \text{Mbit/s}}{50\ \text{KiB} \times 8} = 2{,}500 \qquad (9.20)$$

Rack IOPS

After all the math, what is the performance bottleneck of the storage node? Clearly, the disks. Their limit is therefore used to compute the maximum rack IOPS:

$$\text{Rack IOPS} = 40 \times 308 = 12{,}320 \qquad (9.21)$$

$$\text{Rack IOPS} = 40 \times 292 = 11{,}680 \qquad (9.22)$$

The network switch would be the bottleneck only if it could not support 12,320 × 16 KiB × 8 ≈ 1.6 Gbit/s for the 16 KiB blocks, and 11,680 × 50 KiB × 8 ≈ 4.7 Gbit/s for the 50 KiB blocks.
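The whole chain can be rechecked programmatically. This sketch follows the arithmetic above, including its simplifications (decimal kilobytes/megabytes, an average seek and rotational delay on every access, ATA transfer times rounded to 0.1 ms); the function and dictionary names are illustrative:

```python
KB, MB = 1_000, 1_000_000   # decimal units, matching the rounding used above

def link_iops(io_bytes):
    """Maximum IOPS of each link in the I/O chain, for one storage node."""
    cpu = 1_000 * MB / (50_000 + 100_000)              # Eqs. (9.5)-(9.6)
    mem = 266 * 8 * MB / io_bytes                      # Eqs. (9.7)-(9.8)
    io_bus = 133 * MB / io_bytes                       # Eqs. (9.9)-(9.10), per PATA bus
    xfer_ms = round(io_bytes / (133 * MB) * 1e3, 1)    # Eqs. (9.11)-(9.12)
    ata = 1e3 / (xfer_ms + 0.1)                        # Eqs. (9.13)-(9.14), +0.1 ms overhead
    io_time = 8.5e-3 + 0.5 * 60 / 7_200 + io_bytes / (50 * MB)  # Eqs. (9.15)-(9.16)
    disks = 4 / io_time                                # Eqs. (9.17)-(9.18), four drives
    eth = 1_000 * MB / (8 * io_bytes)                  # Eqs. (9.19)-(9.20)
    return {"cpu": cpu, "memory": mem, "io_bus": io_bus,
            "ata": ata, "disks": disks, "ethernet": eth}

for size in (16 * KB, 50 * KB):
    links = link_iops(size)
    weakest = min(links, key=links.get)
    values = ", ".join(f"{name}={iops:,.0f}" for name, iops in links.items())
    print(f"{size // KB} KiB I/O: {values}; weakest link: {weakest}")
# The four disks (about 308 and 293 IOPS per node) are the bottleneck in both
# cases, so the rack peaks at roughly 40 x 308 = 12,320 IOPS, per Eqs. (9.21)-(9.22).
```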
Cost Evaluation

The rack cost is given by Eq. (9.23).

$$\text{Rack}_{\text{TOTAL}} = 40 \times (\$500 + 4 \times \$375) + \$3{,}000 + \$1{,}500\ \text{(rack)} = \$84{,}500 \qquad (9.23)$$

Some statistics from this scenario: the disks represent about 70% of the total cost; the cost per terabyte is about $1,000 (roughly a factor of 10 to 15 better than a storage cluster from the prior book edition in 2001); and the cost per IOPS is about $7.

Dependability Evaluation

Dependability measures the accomplishment of faultless service and is generally given as the mean time to failure (MTTF). Service interruptions are measured by the mean time to repair (MTTR). Availability, a measure of service delivery without interruption, can then be computed as in Eq. (9.24).

$$\text{Availability} = \frac{\text{MTTF}}{\text{MTTF} + \text{MTTR}} \qquad (9.24)$$

The mean time between failures (MTBF) is given by Eq. (9.25).

$$\text{MTBF} = \text{MTTF} + \text{MTTR} \qquad (9.25)$$

And the failure rate is given by Eq. (9.26).

$$\text{FailureRate} = \frac{1}{\text{MTTF}} \qquad (9.26)$$

The resulting mean time to failure of the rack, under the following MTTF assumptions, is given in Eqs. (9.27) and (9.28):

1. 40× CPU/memory/enclosure = 1,000,000 hours;
2. 40 × 4 PATA disks = 125,000 hours;
3. 40× PATA controllers = 500,000 hours;
4. 1× Ethernet switch = 500,000 hours;
5. 40× power supplies = 200,000 hours;
6. 40× fans = 200,000 hours; and
7. 40 × 2 PATA cables = 1,000,000 hours (one cable per 2 disks).

$$\text{FailureRate} = \frac{40}{1 \times 10^6} + \frac{160}{125 \times 10^3} + \frac{40 + 1}{500 \times 10^3} + \frac{40 + 40}{200 \times 10^3} + \frac{80}{1 \times 10^6} = \frac{1{,}882}{1 \times 10^6} \qquad (9.27)$$

$$\text{MTTF} = \frac{1}{\text{FailureRate}} = \frac{1 \times 10^6}{1{,}882} \approx 531\ \text{hours (22 days, 3 hours)} \qquad (9.28)$$
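Eqs. (9.24) to (9.28) are simple enough to recompute directly. In the sketch below, the component list mirrors the assumptions above, while the 24-hour MTTR at the end is a made-up figure used only to exercise Eq. (9.24):

```python
components = [             # (count, MTTF in hours), from the list above
    (40,      1_000_000),  # CPU/memory/enclosure
    (40 * 4,    125_000),  # PATA disks
    (40 + 1,    500_000),  # PATA controllers + Ethernet switch
    (40 + 40,   200_000),  # power supplies + fans
    (40 * 2,  1_000_000),  # PATA cables
]

failure_rate = sum(count / mttf for count, mttf in components)  # Eq. (9.27)
mttf_rack = 1 / failure_rate                                    # Eq. (9.28)
print(f"failures per 10^6 hours: {failure_rate * 1e6:.0f}")     # 1882
print(f"rack MTTF: {mttf_rack:.0f} hours (about 22 days)")      # ~531 hours

mttr = 24  # hypothetical repair time, in hours (not from the notes)
print(f"availability: {mttf_rack / (mttf_rack + mttr):.3f}")    # Eq. (9.24), ~0.957
```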
Buses

Overview

Transfer rates of I/O devices range from 0.0001 Mbit/s (e.g., keyboard) to between 800 and 8,000 Mbit/s (e.g., graphics display). Fig. 9.6 illustrates the I/O bus connecting a couple of devices to the processor.

Figure 9.6: System (i.e., memory and I/O) bus, simplified view.

Definitions

A bus is a shared communication link used to carry address, data, and control signals. It is basically a set of wires used to connect multiple subsystems, and a basic tool for putting together large and complex systems. Fig. 9.7 shows a memory bus and an I/O bus.

Figure 9.7: A simple example of a bus connecting memory and I/O devices to the processor.

The address bus (or address lines) identifies the source or destination of a data flow; the address bus width gives the maximum addressing capacity of a device. The data bus (or data lines) carries data or instructions; at this level of abstraction it does not matter which. This bus is typically bidirectional, and its bandwidth is decisive for performance. The control bus (or control lines) handles control signals such as read/write, interrupts, and also the bus clock.

Pros & Cons

A considerable advantage of buses is their versatility: new devices can easily be integrated into the system, and peripherals can be moved between computers that share the same bus standard. Another point in their favor is low cost: a bus is basically a single set of wires shared in different ways, following different standards according to the requirements of an application (Fig. 9.8). However, when it comes to communication, the bus can become a bottleneck in the system: bus bandwidth can limit the maximum I/O throughput. Moreover, the maximum bus speed is largely limited by the bus length and the number of devices on the bus. Besides, a bus needs to support a range of devices with widely differing latencies and data transfer rates.

Figure 9.8: A bus: a single set of wires connecting different devices together.

Bus Design

Overview

Both bus speed and bandwidth are greatly impacted by four main points:

1. bus width;
2. bus clocking scheme;
3. operation; and
4. arbitration method.

Bus Width

Generally, the number of address lines determines the size of the addressable memory. The greater the number of lines, the more wires and the larger the connectors that are necessary, and thus the more expensive the hardware becomes. Some processor examples: the 8088 had 20 address lines; the 80286 had four more (24); and the 80386 added another 8 (32). The trend of increasing bus width to increase bus capacity creates physical connection problems. Thus, designers often multiplex data and addresses over the same lines in different phases to reduce the number of lines; but this also reduces the performance of the bus.

Bus Clocking Scheme

Regarding the clocking scheme, buses can be either synchronous or asynchronous. A synchronous bus includes a clock among the control lines, and communication follows a fixed protocol with respect to that clock. An advantage is that it involves very little logic and can run fast. A disadvantage is that all devices on the bus must run at the same clock rate; also, to avoid clock skew, buses cannot be long if they are fast. The other option is the asynchronous bus, which is not clocked. It can accommodate a wide range of devices and can be lengthened without worrying about clock skew; however, it requires a handshake protocol.

Operation

Buses often have a master device in control of one or more slave devices. Fig. 9.9 illustrates this concept, showing a unidirectional control bus (from master to slave) and a bidirectional data bus between them. A bus transaction has two parts. One is the request, where the master issues a command and an address to the slave. The other is the action, comprising the actual command execution, e.g., transferring the data. The master starts the bus transaction by issuing the request to the slave; the slave then responds to the master by sending or receiving data, accordingly.

Figure 9.9: A simple example of a master and slave bus scheme.

Obtaining Access to the Bus

How does a device that wants to use the bus reserve it? Chaos is avoided by using a master-slave scheme: only the bus master controls access to the bus, starting and handling all bus requests, while the slave just responds to read or write requests. In the simplest system, the processor is the only bus master, and all bus requests must be controlled by it. This is a big drawback: the processor is involved in everything. An arbitration method can be used to minimize this negative impact on the system.

Arbitration Method

With arbitration, it is possible to accommodate multiple bus masters on the same bus. However, with multiple bus masters, some mechanism is needed to assure that only one device is selected as master at a time. The method must consider priority among devices as well as fairness, i.e., even the lowest-priority device must eventually operate. Four possible arbitration classes are:

1. distributed arbitration by self-selection, i.e., each device waiting for the bus places its own code, indicating its identity;
2. distributed arbitration by collision detection, e.g., Ethernet;
3. authorization given in sequence, e.g., daisy chain; and
4. authorization given in a central manner, e.g., centralized arbitration.

Daisy Chain

The daisy chain scheme is simple but cannot assure fairness: a low-priority device may be locked out forever and never get access to the bus. The daisy-chained grant signal also impacts bus speed: since the devices are serialized (connected sequentially), the propagation delay increases the farther a device is from the bus arbiter. Fig. 9.10 illustrates this concept.

Figure 9.10: The bus arbiter sends the grant signal to the highest-priority device. When that device finishes, it sends the grant signal on to the next device in priority order. This continues until the lowest-priority device is served and has access to the request and release signals. A possible problem arises when a device fails and never passes the grant signal down the chain.

Centralized Arbitration with a Bus Arbiter

In centralized arbitration, a bus arbiter handles all requests and grants access to the devices according to their priorities. This concept is illustrated in Fig. 9.11.

Figure 9.11: The bus arbiter receives all devices' requests (ReqA, ReqB, and ReqC) and gives the grant signals (GrantA, GrantB, and GrantC) following a fixed priority-based policy. The signal/timing diagram (at the bottom) depicts the internal operation of the arbiter: ReqA is received together with ReqB, and since the former has the highest priority, GrantA is given first so that DeviceA is served. When ReqA becomes inactive (meaning the device no longer needs the bus), GrantA is removed and GrantB is given.
Examples of Common Buses

Some well-known buses used in the consumer and aeronautical industries are mentioned next.

Peripheral Component Interconnect Express (PCI-e)

PCI-e was created by Intel, Dell, HP, and IBM. It connects HDDs, SSDs, Ethernet, graphics, and other cards in personal computers. PCI-e is based on a point-to-point topology, with separate serial links connecting all devices. It replaces older standards, e.g., the Accelerated Graphics Port (AGP), PCI, and PCI-eXtended.

InfiniBand (IB)

InfiniBand originated with Compaq, Dell, Hewlett-Packard, IBM, Intel, Microsoft, and Sun. It is typically used in clusters and racks, and offers very high throughput and very low latency.

MIL-STD-1553

This avionic data bus is mainly used in military avionics, and also in spacecraft, as a dual-redundant technology. A bus controller handles multiple remote terminals connected through redundant links.

ARINC 429

ARINC stands for Aeronautical Radio INC. ARINC 429 is the predominant avionics data bus used on most commercial aircraft. A pair of wires accommodates one transmitter and up to 20 receivers.

Avionics Full-Duplex Switched Ethernet (AFDX)

AFDX is an implementation of deterministic Ethernet, defined by ARINC to address real-time issues.
