Computer Networks: A Systems Approach, Chapter 6 PDF

Document Details

Uploaded by AdoringAntigorite7463

Indiana University–Purdue University Indianapolis

2010

Larry L. Peterson and Bruce S. Davie

Tags

computer networks, network protocols, congestion control, resource allocation

Summary

This document is a chapter from a textbook on computer networks focusing on congestion control and resource allocation. It covers issues in resource allocation, queuing disciplines, TCP congestion control, congestion avoidance mechanisms, and quality of service, illustrated with diagrams and worked concepts.

Full Transcript


Computer Networks: A Systems Approach, 5e
Larry L. Peterson and Bruce S. Davie
Chapter 6: Congestion Control and Resource Allocation
Copyright © 2010, Elsevier Inc. All rights reserved.

Chapter 6 Problem
- We have seen enough layers of the protocol hierarchy to understand how data can be transferred among processes across heterogeneous networks.
- Problem: How do we effectively and fairly allocate resources among a collection of competing users?

Chapter Outline
- Issues in Resource Allocation
- Queuing Disciplines
- TCP Congestion Control
- Congestion Avoidance Mechanisms
- Quality of Service

RESOURCE ALLOCATION

Issues in Resource Allocation
- Resources: bandwidth of the links, and buffers at the routers and switches.
- Packets contend at a router for the use of a link; each contending packet is placed in a queue waiting for its turn to be transmitted over the link.
- When too many packets contend for the same link, the queue overflows and packets get dropped: the network is congested. The network should provide a congestion control mechanism to deal with such a situation.

Congestion Control and Resource Allocation
- Two sides of the same coin: if the network takes an active role in allocating resources, congestion may be avoided and there is no need for congestion control.
- Allocating resources with any precision is difficult, because the resources are distributed throughout the network.
- On the other hand, we can always let the sources send as much data as they want and then recover from congestion when it occurs. This is the easier approach, but it can be disruptive because many packets may be discarded by the network before congestion can be controlled.
- Congestion control and resource allocation involve both hosts and network elements such as routers. In the network elements, various queuing disciplines can be used to control the order in which packets get transmitted and which packets get dropped. At the hosts, the congestion control mechanism paces how fast sources are allowed to send packets.

Network Model: Packet-Switched Network
- We consider resource allocation in a packet-switched network (or internet) consisting of multiple links and switches (or routers).
- In such an environment, a given source may have more than enough capacity on the immediate outgoing link to send a packet, but somewhere in the middle of the network its packets encounter a link that is being used by many different traffic sources.
- [Figure: A potential bottleneck router.]
Network Model: Connectionless Flows
- For much of our discussion, we assume that the network is essentially connectionless, with any connection-oriented service implemented in the transport protocol that is running on the end hosts.
- We need to qualify the term "connectionless" because classifying networks as either connectionless or connection-oriented is a bit too restrictive; there is a gray area in between. In particular, the assumption that all datagrams are completely independent in a connectionless network is too strong. The datagrams are certainly switched independently, but it is usually the case that a stream of datagrams between a particular pair of hosts flows through a particular set of routers.
- [Figure: Multiple flows passing through a set of routers.]

Flow Abstractions
- Host-to-host flows and process-to-process flows: a flow is identical to a channel, but it is visible to the routers.
- State information: routers may maintain some state information for each flow in order to make resource allocation decisions.
- Soft state vs. hard state. Soft state: indirect information used to estimate the state of a flow. Hard state: information explicitly maintained by routers and hosts.

Service Model
- Best-effort service model: all packets are treated equally; there are no guarantees and no preferential treatment for a specific flow.
- Quality-of-service model: supports some kind of preferred service or guarantee, e.g., on bandwidth, delay, or delay variance. Example: video streaming.

Taxonomy, 1
- Router-centric vs. host-centric. In a router-centric design, each router takes responsibility for deciding when packets are forwarded, for selecting which packets are to be dropped, and for informing the hosts that generate the traffic how many packets they are allowed to send. In a host-centric design, the end hosts observe the network conditions (how many packets they are successfully getting through the network) and adjust their behavior accordingly.
- Reservation-based vs. feedback-based. In a reservation-based design, some entity (e.g., the end host) asks the network for a certain amount of capacity to be allocated for a flow; each router then allocates enough resources (buffers and/or a percentage of the link's bandwidth) to satisfy this request, and if the request cannot be satisfied at some router, the router rejects the reservation. In a feedback-based design, the end hosts begin sending data without first reserving any capacity and then adjust their sending rate according to the feedback they receive; this feedback can be explicit (a congested router sends a "please slow down" message to the host) or implicit (the end host adjusts its sending rate according to the externally observable behavior of the network, such as packet losses).

Taxonomy, 2
- Window-based vs. rate-based. Window-based: a window advertisement is used within the network to reserve buffer space. Rate-based: the sender's behavior is controlled by a rate, i.e., how many bits per second the receiver or network is able to absorb.

Evaluation Criteria: Effective Resource Allocation
- Two principal metrics of networking: throughput and delay. We want higher throughput and shorter delay if possible.
- Greedy approach: allow as many packets into the network as possible, to increase throughput and drive the utilization of all the links up to 100%.
- Problem: increasing the number of packets in the network also increases the length of the queues at each router, and longer queues mean packets are delayed longer in the network.
- There is therefore a trade-off between throughput and delay. The ratio of throughput to delay is used to evaluate the effectiveness of a resource allocation scheme:
  Power = Throughput / Delay
- [Figure: Ratio of throughput to delay (power) as a function of load.]
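To see why the power curve peaks well below full utilization, the following minimal sketch plots power against load under an assumed M/M/1-style delay model where delay grows as 1/(1 - load); the model choice is an illustrative assumption, not the book's derivation.

  /* Power = throughput/delay as a function of offered load,
   * assuming normalized throughput = load and delay = 1/(1 - load). */
  #include <stdio.h>

  int main(void) {
      for (double load = 0.1; load < 1.0; load += 0.1) {
          double throughput = load;            /* normalized: offered load gets through */
          double delay = 1.0 / (1.0 - load);   /* queueing delay blows up as load -> 1 */
          printf("load=%.1f  power=%.3f\n", load, throughput / delay);
      }
      return 0;
  }

Under this assumed model power equals load * (1 - load), which peaks at 50% load: pushing utilization toward 100% buys little throughput at the cost of rapidly growing delay, which is the message of the load-vs-power curve.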
Evaluation Criteria: Fair Resource Allocation
- What is the definition of "fairness"? For example, a reservation-based resource allocation scheme provides an explicit way to create controlled "unfairness": we might use reservations to enable a video stream to receive 1 Mbps across some link while a file transfer receives only 10 Kbps over the same link.
- Fairness of bandwidth: each flow receives an equal share of the bandwidth. But even in the absence of reservations, equal shares may not equate to fair shares.
- What counts as fair when flows traverse paths of different lengths?
- [Figure: One four-hop flow competing with three one-hop flows.]

Fairness Quantification: Raj Jain's Fairness Index
- Given a set of flow throughputs (x_1, x_2, ..., x_n), the fairness index is
  f(x_1, ..., x_n) = (x_1 + x_2 + ... + x_n)^2 / (n * (x_1^2 + x_2^2 + ... + x_n^2))
- The fairness index always results in a number between 0 and 1, with 1 representing greatest fairness.
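The index is straightforward to compute directly from the formula above; here is a small self-contained sketch (the sample throughput values are made up for illustration).

  /* Jain's fairness index: f = (sum x_i)^2 / (n * sum x_i^2).
   * Returns a value in (0, 1]; 1 means all flows get equal throughput. */
  #include <stdio.h>

  double jain_index(const double x[], int n) {
      double sum = 0.0, sum_sq = 0.0;
      for (int i = 0; i < n; i++) {
          sum += x[i];
          sum_sq += x[i] * x[i];
      }
      return (sum * sum) / (n * sum_sq);
  }

  int main(void) {
      double equal[]  = {1.0, 1.0, 1.0, 1.0};
      double skewed[] = {4.0, 0.5, 0.25, 0.25};
      printf("equal:  %.3f\n", jain_index(equal, 4));   /* prints 1.000 */
      printf("skewed: %.3f\n", jain_index(skewed, 4));  /* roughly 0.382 */
      return 0;
  }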
QUEUING

Queuing Disciplines: FIFO
- FIFO (first-in-first-out) queuing, also called first-come-first-served (FCFS) queuing: the first packet that arrives at a router is the first packet to be transmitted.
- Each router has finite buffer space; if a packet arrives and the queue (buffer space) is full, the router discards that packet, regardless of which flow the packet belongs to or how important the packet is. This is called tail drop.
- Note that tail drop and FIFO are two separable ideas. FIFO is a scheduling discipline: it determines the order in which packets are transmitted. Tail drop is a drop policy: it determines which packets get dropped.
- Related problems: the convoy effect, and the elephants-and-mice problem.
- [Figure: (a) FIFO queuing; (b) tail drop at a FIFO queue.]

Priority Queuing
- Mark each packet with a priority; the mark could be carried, for example, in the IP header.
- The router keeps multiple FIFO queues, one for each priority class. It always transmits packets out of the highest-priority queue, if that queue is nonempty, before moving on to the next priority queue. Within each priority, packets are still managed in a FIFO manner.
- Problem: the high-priority queue can starve out all the other queues, and a user should NOT be able to set his own packets to high priority in an uncontrolled way.

Fair Queuing
- Goal: give each flow an equal opportunity to access the transmission bandwidth. Each flow has its own logical queue, and ideally the network bandwidth is divided equally among the queues. In practice it is impossible to divide the bandwidth exactly equally on a packet-by-packet basis.
- [Figure: Fluid-flow system vs. packet-by-packet system. In the fluid-flow system, both packets are served simultaneously at rate 1/2 and complete service at t = 2. In the packet-by-packet system, queue 1 is served first at rate 1 while the packet from queue 2 waits, then queue 2 is served at rate 1.]

Fair Queuing, Round-Robin
- Maintain a separate queue for each flow currently being handled by the router; the router then services these queues in a sort of round-robin.
- [Figure: Round-robin service of four flows at a router.]

Fair Queuing, Challenges
- The packets being processed at a router are not necessarily the same length. To truly allocate the bandwidth of the outgoing link in a fair manner, it is necessary to take packet length into consideration.
- For example, if a router is managing two flows, one with 1000-byte packets and the other with 500-byte packets (perhaps because of fragmentation upstream from this router), then simple round-robin servicing of packets from each flow's queue will give the first flow two-thirds of the link's bandwidth and the second flow only one-third.

Fair Queuing, Slicing
- Conceptually, fair queuing is bit-by-bit round-robin: the router transmits a bit from flow 1, then a bit from flow 2, and so on. It is not feasible, however, to interleave the bits from different packets.
- Instead, fair queuing simulates this behavior: first determine when a given packet would finish being transmitted if it were being sent using bit-by-bit round-robin, and then use this finishing time to sequence the packets for transmission.

Fair Queuing: Single Flow
- Let P_i denote the length of packet i, S_i the time when the router starts to transmit packet i, and F_i the time when the router finishes transmitting packet i.
- Clearly, F_i = S_i + P_i.
- When do we start transmitting packet i? It depends on whether packet i arrived before or after the router finished transmitting packet i-1 for the flow.
- Let A_i denote the time that packet i arrives at the router. Then S_i = max(F_{i-1}, A_i), and so F_i = max(F_{i-1}, A_i) + P_i.

Fair Queuing: Multiple Flows
- For every flow, we calculate a finish time for each packet that arrives, using our formula. The next packet to transmit is always the packet with the lowest timestamp: the packet that should finish transmission before all others.
- Notation: A(i,k) is the arrival time of the k-th packet of flow i; P(i,k) is the size of the k-th packet of flow i; F(i,k) is the finish time of the k-th packet of flow i; R(t) is the number of rounds of bit-by-bit round-robin completed at time t.
- The actual duration of a given round is proportional to the number of busy buffers, so R(t) is a function of the actual number of busy buffers at time t.
- [Figure: Example of fair queuing in action: (a) packets with earlier finishing times are sent first; (b) sending of a packet already in progress is completed.]

Fair Queuing, Reality
- [Figure: Two packets arrive at t = 0, A(1,1) = 0 into buffer 1 and A(2,1) = 0 into buffer 2. Fluid-flow system: both packets are served at rate 1/2, then the packet from buffer 2 is served alone at rate 1, completing at t = 3. Packet-by-packet fair queuing: the packet from buffer 1 (finish tag 1) is served first at rate 1 while the packet from buffer 2 (finish tag 2) waits, then buffer 2 is served at rate 1.]

Finishing Tag for Multiple Fair Queues
- When a packet of a flow arrives at an empty buffer, some other flow may already have a packet in another queue waiting to be transmitted.
- Fair queuing assigns each packet a finish tag:
  If the packet arrives at an empty buffer: F(i,k) = R(A(i,k)) + P(i,k)
  If the packet arrives at a nonempty buffer: F(i,k) = F(i,k-1) + P(i,k)
  Combined form: F(i,k) = max{F(i,k-1), R(A(i,k))} + P(i,k)
- Principle of work conserving: the link is never left idle as long as there is at least one packet in the queue. If flow 1 is the only active flow, flow 1 can use the full link capacity; as soon as the other flows start sending, however, they will start to use their share, and the capacity available to flow 1 will be reduced.
- Variation of fair queuing: weighted fair queuing (WFQ), in which a weight is assigned to each flow.
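The following sketch shows the finish-tag bookkeeping from the combined formula above for the two-buffer example. One simplification, which is an assumption rather than the full algorithm: the packet's arrival time stands in for the round number R(t), which in real fair queuing advances more slowly as the number of busy queues grows.

  /* Fair-queuing finish tags: F(i,k) = max{F(i,k-1), R(A(i,k))} + P(i,k),
   * with arrival time used in place of R(t) for simplicity. */
  #include <stdio.h>

  #define NFLOWS 2

  static double last_finish[NFLOWS];  /* F(i, k-1) per flow; 0 while idle */

  double finish_tag(int flow, double arrival, double length) {
      double start = (last_finish[flow] > arrival) ? last_finish[flow] : arrival;
      last_finish[flow] = start + length;
      return last_finish[flow];
  }

  int main(void) {
      /* Two packets arrive at t=0: flow 0 has length 1, flow 1 has length 2. */
      double f0 = finish_tag(0, 0.0, 1.0);
      double f1 = finish_tag(1, 0.0, 2.0);
      /* Transmit the packet with the smaller finish tag first. */
      printf("flow %d goes first (tags %.1f vs %.1f)\n", f0 < f1 ? 0 : 1, f0, f1);
      return 0;
  }

This reproduces the tags in the figure above: flow 0's packet gets tag 1, flow 1's gets tag 2, so flow 0 is served first.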
TCP CONGESTION CONTROL

TCP Congestion Control
- A late development: TCP congestion control was introduced into the Internet in the late 1980s by Van Jacobson, roughly eight years after the TCP/IP protocol stack had become operational.
- Before then the Internet suffered from congestion collapse: hosts would send their packets as fast as the advertised window would allow, congestion would occur at some router (causing packets to be dropped), and the hosts would time out and retransmit their packets, resulting in even more congestion.
- Distributed approach: each source estimates how much capacity is available, so that it knows how many packets it can safely have in transit.
- Self-clocking: the arrival of an ACK is used as a signal that one of the source's packets has left the network, and that it is therefore safe to insert a new packet into the network without adding to the level of congestion.

Vanilla TCP Congestion Control
- Congestion window state variable (cwnd): each TCP connection maintains this state variable to limit how much data it is allowed to have in transit at a given time.
- Congestion window vs. advertised window: the congestion window reflects the state of the network and is computed by the sender itself; the advertised window reflects the state of the recipient and is reported by the recipient.
- The maximum number of bytes in transit is now the minimum of the congestion window and the advertised window, so TCP's effective window is revised as follows:
  MaxWindow = MIN(cwnd, awnd)
  EffectiveWindow = MaxWindow - (LastByteSent - LastByteAcked)

TCP Congestion Control, Challenge
- How does TCP come to learn an appropriate value for cwnd? Note that the advertised window (awnd) is sent by the receiving side of the connection, but there is no one to send a suitable cwnd to the sending side of TCP.
- Solution: self-learning based on experience. Decrease cwnd when the congestion level goes up; increase it when the congestion level goes down.

TCP Congestion Control, AIMD
- How does the source know the congestion level? When a packet is dropped due to congestion, a timeout occurs, and it is rare that a packet is dropped because of an error during transmission (really?). TCP therefore interprets timeouts as a sign of congestion and reduces its transmission rate.
- AIMD (Additive Increase, Multiplicative Decrease): each time a timeout occurs, the source sets cwnd to half of its previous value.

TCP Congestion Control, MSS
- MSS, Maximum Segment Size: the largest data size that a network connection accepts, typically in a range from 128 bytes up to the MTU.
- The congestion window is always greater than or equal to 1 MSS.
- Increasing the congestion window: every time the source successfully sends a congestion window's worth of packets (all successfully ACKed), it adds the equivalent of 1 MSS to cwnd.
- [Figure: Packets in transit during additive increase, with one packet being added each RTT.]
- Note that TCP does not wait for an entire window's worth of ACKs to add one packet's worth to the congestion window, but instead increments cwnd by a little for each ACK that arrives.

TCP Congestion Control, Increment
- The congestion window is incremented for each ACK as follows:
  Increment = MSS * (MSS / cwnd)
  cwnd += Increment
- That is, rather than incrementing cwnd by an entire MSS each RTT, we increment it by a fraction of MSS every time an ACK is received. Assuming that each ACK acknowledges the receipt of MSS bytes, that fraction is MSS/cwnd.

TCP Congestion Control, Slow Start
- The additive increase mechanism is good when the source is operating close to the available capacity, but it takes too long to ramp up a connection when it starts from scratch.
- Slow start is a mechanism to increase the congestion window rapidly from a cold start; it effectively increases the congestion window exponentially, rather than linearly.
- The source starts out by setting the window to one packet (cwnd = 1). When the ACK for this packet arrives, it adds 1 to cwnd and then sends two packets (cwnd = 2). Upon receiving the corresponding two ACKs, it increments the window by 2, one for each ACK, and next sends four packets (cwnd = 4).
- That is, TCP effectively doubles the number of packets in transit every RTT.
- [Figure: Packets in transit between source and destination during slow start.]
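A minimal sketch of the two per-ACK rules just described, in bytes: below a threshold, slow start adds one MSS per ACK (doubling cwnd each RTT); above it, additive increase adds MSS * (MSS / cwnd) per ACK (about one MSS per RTT). The threshold variable here anticipates the CongestionThreshold (ssthresh) introduced below; the specific values are illustrative assumptions.

  #include <stdio.h>

  #define MSS 1460u   /* assumed segment size in bytes */

  unsigned int on_ack(unsigned int cwnd, unsigned int threshold) {
      if (cwnd < threshold)
          return cwnd + MSS;              /* slow start: +1 MSS per ACK */
      return cwnd + MSS * MSS / cwnd;     /* additive increase: fraction of MSS */
  }

  int main(void) {
      unsigned int cwnd = MSS, threshold = 8 * MSS;
      for (int ack = 0; ack < 20; ack++)
          cwnd = on_ack(cwnd, threshold);
      printf("cwnd after 20 ACKs: %u bytes\n", cwnd);
      return 0;
  }

The first seven ACKs grow cwnd exponentially to the threshold; after that each ACK adds only MSS/cwnd of an MSS, so roughly one full MSS accrues per window's worth of ACKs, i.e., per RTT.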
TCP Congestion Control, When Slow Start Runs
Slow start is initiated in two different situations:
- Cold start: at the very beginning of a connection, the source has no idea how many packets it is going to be able to have in transit at a given time. In this situation, slow start continues to double CongestionWindow each RTT until there is a loss, at which time a timeout causes multiplicative decrease to divide CongestionWindow by 2.
- The connection goes dead while waiting for a timeout to occur: the source has sent as much data as the advertised window allows, and so it blocks while waiting for an ACK that will not arrive. Eventually a timeout happens, but by this time there are no packets in transit, meaning that the source will receive no ACKs; instead it will receive a single cumulative ACK that reopens the entire advertised window. The source then uses slow start to restart the flow of data rather than sending a whole window's worth of data onto the network all at once.

TCP Congestion Control, Congestion Threshold
- The "target" congestion window is the last successful congestion window: the value of cwnd prior to the last packet loss, divided by 2 as a result of the loss.
- Slow start is used to rapidly increase the sending rate up to this target value, and additive increase is used beyond that point.
- TCP remembers the target window in a variable called ssthresh, set equal to the cwnd value that results from multiplicative decrease. The variable cwnd is then reset to one packet and is incremented by one packet for every ACK received until it reaches ssthresh; effectively, cwnd doubles each RTT up to that point.

TCP Congestion Control, Implementation
The per-ACK window update can be written as the following code fragment:

  {
      u_int cw = state->CongestionWindow;
      u_int incr = state->maxseg;
      if (cw > state->CongestionThreshold)
          incr = incr * incr / cw;
      state->CongestionWindow = min(cw + incr, TCP_MAXWIN);
  }

where state represents the state of a particular TCP connection and TCP_MAXWIN defines an upper bound on how large the congestion window is allowed to grow.

TCP Congestion Control, Trace
- [Figure: Behavior of TCP congestion control. Colored line = value of CongestionWindow over time; solid bullets at top of graph = timeouts; hash marks at top of graph = time when each packet is transmitted; vertical bars = time when a packet that was eventually retransmitted was first transmitted.]

TCP Congestion Control, Fast Retransmit (TCP Tahoe)
- In vanilla TCP congestion control, the coarse-grained implementation of TCP timeouts led to long idle periods while waiting for a timer to expire.
- Improvement #1: fast retransmit (TCP Tahoe), a heuristic that sometimes triggers the retransmission of a dropped packet sooner than the regular timeout mechanism.
- Every time a data packet arrives at the receiving side, the receiver responds with an acknowledgment, even if this sequence number has already been acknowledged. When a packet arrives out of order, TCP cannot yet acknowledge the data the packet contains (because earlier data has not yet arrived), so TCP resends the same acknowledgment it sent the last time, called a duplicate ACK.
- When the sender sees a duplicate ACK, it knows that the other side must have received a packet out of order, which suggests that an earlier packet might have been lost. Because the earlier packet may have only been delayed rather than lost, the sender waits until it sees some number of duplicate ACKs and then retransmits the missing packet. In practice, TCP waits until it has seen three duplicate ACKs before retransmitting the packet.
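A minimal sketch of the sender-side duplicate-ACK counting just described: ACKs repeating the same sequence number are counted, and the third duplicate triggers a retransmission instead of waiting for the coarse-grained timeout. The sequence numbers in main() are illustrative.

  #include <stdio.h>

  #define DUP_ACK_THRESHOLD 3

  static unsigned int last_ack = 0;
  static int dup_count = 0;

  void on_ack_arrival(unsigned int ack_seq) {
      if (ack_seq == last_ack) {
          if (++dup_count == DUP_ACK_THRESHOLD)
              printf("fast retransmit: resend segment starting at %u\n", ack_seq);
      } else {
          last_ack = ack_seq;   /* new data acknowledged: reset the counter */
          dup_count = 0;
      }
  }

  int main(void) {
      /* A segment is lost, so the receiver keeps ACKing up to byte 2920. */
      unsigned int acks[] = {1460, 2920, 2920, 2920, 2920};
      for (int i = 0; i < 5; i++)
          on_ack_arrival(acks[i]);
      return 0;
  }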
TCP Congestion Control, Fast Recovery (TCP Reno)
- Improvement #2: fast recovery (TCP Reno). When the fast retransmit mechanism signals congestion, the sender reduces the congestion window to half of its current value and resumes additive increase from there, rather than resetting the window to one packet and going back through slow start.
- [Figure: Trace of TCP with fast retransmit and fast recovery. Colored line = CongestionWindow; solid bullet = timeout; hash marks = time when each packet is transmitted; vertical bars = time when a packet that was eventually retransmitted was first transmitted.]

CONGESTION AVOIDANCE

Congestion Avoidance Mechanisms
Two strategies against congestion:
- Reactive: control congestion once it happens (as in TCP). TCP repeatedly increases the load on the network, and when congestion occurs (loss of packets), it backs off from that point.
- Proactive: try to avoid congestion in the first place, by predicting when congestion is about to happen and reducing the data rate before packets start being discarded. Proactive mechanisms have not yet been widely adopted in TCP/IP, but they are popular in other connection-oriented technologies such as ATM.

DECbit, a Congestion Avoidance Mechanism
- Developed for use on a connectionless network with a connection-oriented transport protocol, an architecture like that of TCP/IP.
- Collaboration between routers and end nodes: each router monitors its load and explicitly notifies the end nodes when congestion is about to occur, by setting a binary congestion bit (the DECbit) in the packets that flow through it.
- The destination host copies this congestion bit into the ACK it sends back to the source. Upon receiving an ACK with the DECbit set, the source adjusts its sending rate so as to avoid congestion.

Single Congestion Bit
- A router sets this bit in a packet if its average queue length is greater than or equal to 1 at the time the packet arrives.
- This average queue length is measured over a time interval that spans the last busy+idle cycle plus the current busy cycle, i.e., it includes at least one complete pair of busy and idle cycles.

DECbit, Source Behavior
- The source records how many of its packets came back with the congestion bit set and computes the ratio.
- If at least 50% of the last window's worth of packets had the bit set, the source decreases its congestion window to 0.875 times its previous value; otherwise it increases the congestion window by one packet.

Random Early Detection (RED)
- RED can be used with TCP, which detects congestion by means of timeouts or some other means of detecting packet loss, such as duplicate ACKs.
- How it works: each router monitors its own queue length, and when it detects that congestion is imminent, it notifies the source to adjust its congestion window by dropping one of its packets (implicit notification). The source is effectively notified by the subsequent timeout or duplicate ACKs.
- Early drop: the router drops a few packets before it has exhausted its buffer space completely, in the hope that it will not have to drop lots of packets later on.
- RED is the policy for deciding when to drop a packet and which packet to drop.

RED, Average Queue Length and Thresholds
- RED computes an average queue length with an exponentially weighted moving average (EWMA):
  AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen
  where 0 < Weight < 1 and SampleLen is the length of the queue when a sample measurement is made.
- RED uses two thresholds: MinThreshold and MaxThreshold. When a packet arrives at the gateway, RED compares the current AvgLen with these two thresholds, according to the following rules:
  if AvgLen <= MinThreshold: queue the packet
  if MinThreshold < AvgLen < MaxThreshold: calculate probability P and drop the arriving packet with probability P
  if AvgLen >= MaxThreshold: drop the arriving packet

RED, Drop Probability
- The drop probability P is a function of both AvgLen and how long it has been since the last packet was dropped. It is computed as follows:
  TempP = MaxP * (AvgLen - MinThreshold) / (MaxThreshold - MinThreshold)
  P = TempP / (1 - count * TempP)
- Here count keeps track of how many newly arriving packets have been queued (not dropped) while AvgLen has been between the two thresholds: the more packets that have been queued since the last drop, the higher the drop probability.
- [Figure: RED thresholds on a FIFO queue.]
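The rules and formulas above can be combined into a single arrival-time decision, sketched below. The parameter values are illustrative assumptions, not recommended settings.

  /* RED on packet arrival: returns 1 if the packet should be dropped. */
  #include <stdio.h>
  #include <stdlib.h>

  #define WEIGHT 0.002   /* EWMA weight */
  #define MIN_TH 5.0     /* MinThreshold, in packets */
  #define MAX_TH 15.0    /* MaxThreshold, in packets */
  #define MAX_P  0.02    /* MaxP */

  static double avg_len = 0.0;
  static int count = 0;  /* packets queued since the last drop */

  int red_on_arrival(int sample_len) {
      avg_len = (1.0 - WEIGHT) * avg_len + WEIGHT * sample_len;
      if (avg_len <= MIN_TH) { count = 0; return 0; }   /* queue the packet */
      if (avg_len >= MAX_TH) { count = 0; return 1; }   /* drop the packet  */
      double temp_p = MAX_P * (avg_len - MIN_TH) / (MAX_TH - MIN_TH);
      double p = temp_p / (1.0 - count * temp_p);
      if ((double)rand() / RAND_MAX < p) { count = 0; return 1; }
      count++;
      return 0;
  }

  int main(void) {
      /* Feed a queue hovering around 10 packets and count early drops. */
      int drops = 0;
      for (int i = 0; i < 100000; i++)
          drops += red_on_arrival(10);
      printf("early drops: %d of 100000\n", drops);
      return 0;
  }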
Source-Based Congestion Avoidance
- The source itself might notice that the RTT is increasing, possibly because the packet queues at the routers are building up.
- Method #1: if the current RTT is greater than the average RTT, decrease the congestion window by one-eighth.
- Method #2: consider both the RTT and the window size. Compute
  Product = (CurrentWindow - OldWindow) * (CurrentRTT - OldRTT)
  If Product is positive, the source decreases the window size by one-eighth; if Product is negative or 0, the source increases the window by one maximum packet size. (A sketch of this rule follows.)
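A minimal sketch of Method #2 above: the sign of the product decides the adjustment. The one-eighth decrease and one-packet increase follow the slide; the parameter names and units are illustrative assumptions.

  #include <stdio.h>

  double adjust_window(double cur_win, double old_win,
                       double cur_rtt, double old_rtt,
                       double max_packet) {
      double product = (cur_win - old_win) * (cur_rtt - old_rtt);
      if (product > 0.0)
          return cur_win - cur_win / 8.0;  /* window grew and RTT grew: back off */
      return cur_win + max_packet;         /* otherwise probe for more bandwidth */
  }

  int main(void) {
      /* Window grew from 8 to 9 packets while RTT rose 100 -> 110 ms: back off. */
      printf("%.3f packets\n", adjust_window(9.0, 8.0, 0.110, 0.100, 1.0));
      return 0;
  }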
QUALITY OF SERVICE

Quality of Service
- For many years, packet-switched networks have offered the promise of supporting multimedia applications, that is, those that combine audio, video, and data. After all, once digitized, audio and video information become like any other form of data: a stream of bits to be transmitted.
- One obstacle to the fulfillment of this promise has been the need for higher-bandwidth links. Recently, however, improvements in coding have reduced the bandwidth needs of audio and video applications, while at the same time link speeds have increased.
- Multimedia service requires both sufficient bandwidth and timeliness of delivery.
- Real-time applications, i.e., applications that are sensitive to the timeliness of data, need some assurance that data is likely to arrive "on time". Examples: voice and video applications, and industrial control such as remote robot control.
- Non-real-time applications can use an end-to-end retransmission strategy to make sure that data arrives correctly; they place no requirement on timeliness.
- Conclusion: the network must treat some packets differently from others; simple best effort is not enough. "Quality of service (QoS)" refers to supporting these different levels of service.

Real-Time Applications
- Cycle: generating (e.g., voice samples are collected at a rate of one per 125 microseconds), transmitting, and playing back (samples must be played back at a rate of one per 125 microseconds).
- Interval between samples: each sample has a playback time 125 microseconds later than the preceding sample. If data arrives after its appropriate playback time, it is useless.
- Delay between source and destination: for some audio applications there are limits to how long delivery can be delayed if the application is to remain "interactive", e.g., about 300 ms for a voice conversation.
- [Figure: A playback buffer, relating generation, network delay, and playback.]

Taxonomy of Real-Time Applications
- Tolerance of data loss: "loss" might occur because a packet arrives too late to be played back, or because of the usual causes in the network.
  Interpolation: one lost audio sample can be interpolated from the surrounding samples, but more and more loss results in incomprehensible speech.
  Intolerable data loss: some real-time applications cannot tolerate loss; losing the packet is not acceptable, e.g., a command instructing a robot arm.
- Adaptability:
  Delay adaptive: an audio application might be able to adapt to the amount of delay. An early packet can be buffered, and given a consistent delay, the playback point can be adjusted.
  Rate adaptive: video coding algorithms can trade off bit rate versus quality.
- [Figure: Taxonomy of real-time applications.]

Quality of Service, Approaches
- Fine-grained approaches provide QoS to individual applications or flows, e.g., Integrated Services, associated with RSVP (Resource Reservation Protocol), developed in the IETF.
- Coarse-grained approaches provide QoS to large classes of data or aggregated traffic, e.g., Differentiated Services, the most widely deployed QoS mechanism.

QoS, Integrated Services (IntServ)
- Produced by the IETF around 1995-97.
- Service classes:
  Guaranteed service: the network should guarantee that the maximum delay any packet will experience has some specified value.
  Controlled load service: the aim is to emulate a lightly loaded network for those applications that request the service, even though the network as a whole may in fact be heavily loaded.
- RSVP: a protocol implementing reservations using the IntServ model.
- Mechanisms:
  Flowspec: qualitative and quantitative information given to the network.
  Admission control: upon receiving a request for a particular service, the network decides whether it can provide that service; admission control is this process of deciding.
  Resource reservation: a process of exchanging information such as requests for service, flowspecs, and admission control decisions between the network and the users.
  Packet scheduling: the network switches and routers need to meet the requirements of the flows; packet scheduling manages the way packets are queued and scheduled for transmission in the switches and routers.

QoS, Flowspec
The flowspec has two separable parts:
- TSpec describes the flow's traffic characteristics, e.g., bandwidth. For most applications the bandwidth is not a single number; it varies constantly. A video application will generate more bits per second when the scene is changing rapidly than when it is still, so just knowing the long-term average bandwidth is not enough. Suppose 10 flows arrive at a switch on separate ports and all leave on the same 10 Mbps link. If each flow is expected to send no more than 1 Mbps, there is no problem. But if these are variable-bit-rate applications such as compressed video, they will occasionally send more than the average rate; if enough sources send more than their average rates, the total rate at which data arrives at the switch will exceed 10 Mbps, the excess data will be queued, and the longer the condition persists, the longer the queue will get.
- RSpec describes the service requested from the network and is very service specific, e.g., with a guaranteed service you could specify a delay target or bound.

QoS, Token Bucket Filter
- A token bucket filter is one way to describe the bandwidth characteristics of sources. The filter is described by two parameters: a token rate r and a bucket depth B.
- To be able to send a byte, a token is needed; to send a packet of length n, n tokens are needed. Tokens are accumulated at a rate of r per second, and no more than B tokens can be accumulated. Initially there are no tokens.
- Flow control implication: we can send a burst of as many as B bytes into the network as fast as we want, but over a sufficiently long interval we cannot send more than r bytes per second.

QoS, Token Bucket Filter Examples
To characterize a flow's bandwidth requirement (for simplicity, assume each flow can send data as individual bytes rather than as packets):
- Steady flow: flow A generates data at a steady rate of 1 MBps, so it can be described by a token bucket filter with a rate r = 1 MBps and a bucket depth of 1 byte. It receives tokens at a rate of 1 MBps, but since it cannot store more than 1 token, it spends them immediately.
- Bursty flow: flow B sends at a rate that averages out to 1 MBps over the long term, but does so by sending at 0.5 MBps for 2 seconds and then at 2 MBps for 1 second. Since the token bucket rate is a long-term average rate, flow B can also be described by a token bucket with a rate of 1 MBps. Unlike flow A, however, flow B needs a bucket depth of at least 1 MB, so that it can store up tokens while it sends at less than 1 MBps and use them when it sends at 2 MBps. For the first 2 seconds it receives tokens at a rate of 1 MBps but spends them at only 0.5 MBps, so it can save up 2 * 0.5 = 1 MB of tokens, which it spends in the third second.
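The following sketch implements the token bucket rules above (rate r, depth B, a packet of n bytes needs n tokens) and demonstrates the burst behavior; the rate and depth values echo the examples, while the call pattern in main() is an illustrative assumption.

  #include <stdio.h>

  typedef struct {
      double tokens;     /* current tokens, in bytes */
      double rate;       /* token accumulation rate r, bytes per second */
      double depth;      /* bucket depth B, maximum tokens held */
      double last_time;  /* time of the previous update, seconds */
  } TokenBucket;

  /* Returns 1 if nbytes conform (and spends the tokens), 0 otherwise. */
  int tb_send(TokenBucket *tb, double now, double nbytes) {
      tb->tokens += (now - tb->last_time) * tb->rate;   /* accumulate */
      if (tb->tokens > tb->depth) tb->tokens = tb->depth;
      tb->last_time = now;
      if (tb->tokens < nbytes) return 0;   /* not enough tokens */
      tb->tokens -= nbytes;
      return 1;
  }

  int main(void) {
      /* r = 1 MBps, B = 1 MB, starting empty, as in the examples above. */
      TokenBucket tb = {0.0, 1e6, 1e6, 0.0};
      printf("%d\n", tb_send(&tb, 1.0, 1e6));  /* 1: a full 1 MB burst conforms */
      printf("%d\n", tb_send(&tb, 1.0, 1.0));  /* 0: nothing left at that instant */
      printf("%d\n", tb_send(&tb, 2.0, 1e6));  /* 1: refilled after one more second */
      return 0;
  }

This shows the two halves of the contract: a burst of up to B bytes may go out at once, but the sustained rate over a long interval cannot exceed r bytes per second.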
QoS, Admission Control
- When a new flow wants to receive a particular level of service, admission control looks at the TSpec and RSpec of the flow and tries to decide whether the desired service can be provided to that amount of traffic, given the currently available resources, without causing any previously admitted flow to receive worse service than it had requested.
- If the network can provide the service, the flow is admitted; if not, it is denied.

QoS, Resource Reservation Protocol (RSVP)
- RSVP reserves resources across a network using the IntServ model.
- Design assumptions:
  Soft state, to maintain the robustness of IP: parts of the network may go up and down, and RSVP preserves robustness by keeping only "soft state" in the routers.
  Receiver-oriented, to support multicast flows while maintaining quality: the receivers, not the sender, keep track of their own needs.
- Procedure:
  The receiver needs to know the sender's TSpec; the sender supplies it in a message sent from the sender to the receiver.
  The receiver also needs to know the path from sender to receiver, so that a resource reservation can be established at each router on the path. Each router looks at this message (called a PATH message) as it goes past and figures out the reverse path that will be used to send reservations from the receiver back to the sender.
  Having received a PATH message, the receiver sends a reservation back "up" the multicast tree in a RESV message. This message contains the sender's TSpec and an RSpec describing the requirements of this receiver.
  Each router on the path looks at the reservation request and tries to allocate the necessary resources to satisfy it. If the reservation can be made, the RESV request is passed on to the next router; if not, an error message is returned to the receiver that made the request.
  If all goes well, the correct reservation is installed at every router between the sender and the receiver. As long as the receiver wants to retain the reservation, it sends the same RESV message about once every 30 seconds.
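Each router's decision to accept or pass on a RESV request is an admission control check of the kind described above. A minimal sketch, under the simplifying assumption of a pure rate-based test (real IntServ admission also weighs bucket depths, delay bounds, and the RSpec):

  #include <stdio.h>

  #define LINK_CAPACITY 10e6   /* assumed 10 Mbps outgoing link */

  static double reserved = 0.0;  /* sum of admitted TSpec rates, bits/sec */

  /* Admit the flow only if the total reserved rate stays within capacity. */
  int admit(double tspec_rate) {
      if (reserved + tspec_rate > LINK_CAPACITY)
          return 0;              /* reject: would degrade admitted flows */
      reserved += tspec_rate;    /* account for the new reservation */
      return 1;
  }

  int main(void) {
      printf("%d\n", admit(6e6));  /* 1: admitted */
      printf("%d\n", admit(3e6));  /* 1: admitted */
      printf("%d\n", admit(2e6));  /* 0: only 1 Mbps remains */
      return 0;
  }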
QoS, Reservations on a Multicast Tree
- [Figure: Making reservations on a multicast tree with PATH and RESV messages.]

QoS, Packet Classifying and Scheduling
Once a reservation has been completed along the path, the routers must actually deliver the requested service to the data packets:
- Classifying packets: associating each packet with the appropriate reservation.
- Packet scheduling: managing the packets in the queues so that they receive the requested service.

QoS, Differentiated Services (DiffServ)
- A mechanism for classifying and managing network traffic and providing quality of service (QoS) on IP networks.
- Simple, with a small number of classes of traffic: critical traffic (premium service) and non-critical traffic (regular service), marked using the IP header TOS field.
- Questions: Who sets the premium bit, and under what circumstances? What does a router do differently when it sees a packet with the bit set?

DiffServ
- Who sets the premium bit? It is natural to set the bit at an administrative boundary; e.g., the router at the edge of an ISP's network might set the bit for packets arriving on an interface that connects to a particular company's network.
- What does a router do? "Per-hop behaviors" (PHBs) are a set of router behaviors to be applied to marked packets, standardized by the IETF.
  Expedited Forwarding (EF) PHB: packets marked for EF treatment should be forwarded by the router with minimal delay and loss. The arrival rate of EF packets at the router is strictly limited to be less than the rate at which the router can forward EF packets (to avoid over-use or misuse).
  Assured Forwarding (AF) PHB: an enhancement to the basic RED algorithm using two separate drop probability curves. The "out" curve has a lower MinThreshold than the "in" curve, so under low levels of congestion only packets marked "out" are discarded by the RED algorithm. If congestion becomes more serious, a higher percentage of "out" packets are dropped, and if the average queue length exceeds Min_in, RED starts to drop "in" packets as well.
- [Figure: RED with "in" and "out" drop probabilities.]

Summary
- We have discussed different resource allocation mechanisms.
- We have discussed different queuing disciplines.
- We have discussed TCP congestion control mechanisms.
- We have discussed congestion avoidance mechanisms.
- We have discussed quality of service.
