Control 1 Summary (PDF)


Summary

This document provides an introduction to transport networks and details the evolution of networks, including PCM and TDM techniques. The document covers requirements like scalability, multi-service, quality, cost-efficiency, and related concepts. It's suitable for undergraduate students or professionals in the field.

Full Transcript


INTRODUCTION TO TRANSPORT NETWORK

TRANSPORT NETWORK: Reliable aggregation and transport of any client traffic type, at any scale, at the lowest cost per bit. Requirements:
- Scalability: ability to support any number of client traffic instances whatever the network size, from access to core (Layering, Partitioning).
- Multi-service: ability to deliver any type of client traffic (transparency to service).
- Quality: ability to ensure that client traffic is reliably delivered at monitored e2e performance (connection oriented, strong OAM, traffic engineering, resource reservation).
- Cost-Efficiency: keeping processing complexity as low as possible and operations easy (CAPEX: low protocol complexity; OPEX: unified management and control access).
  o CAPEX: capital expenditures creating future benefits (equipment, property, industrial buildings, networking software…)
  o OPEX: operating expenditures (operation costs, license fees, maintenance and repairs, insurance, advertising, supplies, utilities…)

EVOLUTION OF NETWORKS
- First digital transmission in 1962 (Bell Labs): use of Pulse Code Modulation (PCM, patented in France in 1938) and Time Division Multiplexing (TDM) techniques.
  o The goal of PCM was to convert an analogue voice telephone channel, band-limited to the range 300-3400 Hz, into a digital one. It involves 3 phases: sampling (Nyquist), quantization and encoding.
  o BW = 4 kHz -> fs = 8 kHz. 8 bits to codify each sample. T = 1/fs = 125 µs. Bitrate obtained: 8 kHz × 8 bits = 64 kbps.
  o To reduce the number of physical connections, communications carriers employ multiplexing. To enable multiple simultaneous voice conversations to be routed onto a common circuit -> TDM technique.
  o PCM/TDM allows several digital voice channels to be transmitted over the same medium on a time-divided basis. The multiplexer assembles a full character from each data source into a frame for transmission: byte-by-byte or channel-by-channel multiplexing. Slotted medium of 64 kbps/channel.
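The sampling and bit-rate arithmetic above can be checked in a few lines (a sketch using exactly the constants quoted in the text):

```python
# PCM dimensioning of one voice channel, using the figures quoted above.
BW_HZ = 4_000                  # band-limited voice channel, taken as 4 kHz
FS_HZ = 2 * BW_HZ              # Nyquist: sample at twice the bandwidth -> 8 kHz
BITS_PER_SAMPLE = 8            # quantization + encoding into 8-bit words
T_US = 1e6 / FS_HZ             # sampling period in microseconds
BITRATE_BPS = FS_HZ * BITS_PER_SAMPLE

print(T_US, BITRATE_BPS)       # 125.0 us per sample, 64000 bps per channel
```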
The resulting channel is the 1st-level carrier: E-carrier (Europe) or T-carrier (USA).
- Whichever transmission technology is used to transport voice channels, it must be able to transport one sample of each channel every 125 µs. The signals of all hierarchical levels of PDH and SDH are organized in frames of the same duration (125 µs), independently of the number of transported channels. In this way, each byte at a specified position within the frame can carry one telephone channel or, equivalently, a digital channel of capacity 64 kbps.

PDH HIERARCHY
To transport more than 30 channels, the ITU-T defined the PDH hierarchy: the higher hierarchical levels are obtained by multiplexing N lower-level frames within a frame whose nominal transmission rate is more than N times that of the lower level. Level 1 of the hierarchy is multiplexed byte-by-byte, but the higher levels are multiplexed bit-by-bit. It is based on a plesiochronous (meaning "almost synchronous") operation mode, with no synchronization network and clocks whose values fluctuate around a nominal frequency value (synchronism: setting up a common clock between origin and reception nodes in order to correctly recover transmitted signals).
- Higher-level multiplexers receive frames from lower-level multiplexers with clocks whose values fluctuate around a nominal frequency value within certain margins of tolerance. The margins are set by the ITU-T recommendations for each hierarchical level. A justification mechanism is needed to deal with these fluctuations.

Limitations of Plesiochronous Networks
The plesiochronous ANSI and ETSI hierarchies were not compatible. There were no standards defined for rates over 45 Mbps in T-carrier (USA) and over 140 Mbps in European PDH. Their management, supervision and maintenance capabilities are limited, as there are no overhead bytes to support these functions.
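The "more than N times" property of the PDH hierarchy can be seen directly in the European rates (a sketch using the standard E1/E2 figures; the breakdown of the surplus bits is simplified):

```python
# An E1 frame carries 32 byte slots (30 voice + 2 for alignment/signalling)
# every 125 us, i.e. 32 x 64 kbps.
E1_KBPS = 32 * 64                      # 2048 kbps
# PDH level 2 multiplexes 4 E1s bit-by-bit, yet its nominal rate is MORE than
# 4 x 2048 kbps: the surplus carries frame alignment and justification bits
# that absorb the plesiochronous clock fluctuations.
E2_KBPS = 8448
surplus = E2_KBPS - 4 * E1_KBPS
print(E1_KBPS, 4 * E1_KBPS, surplus)   # 2048 8192 256
```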
Equipment from different plesiochronous manufacturers could not always be interconnected, because they implemented additional management channels or proprietary bit rates, so in PDH it was not possible to create higher bit rates directly. Access to 64 kbps digital channels from higher PDH hierarchical signals requires full de-multiplexing: the use of bit-oriented procedures removes any trace of the channels, and many multiplexers and de-multiplexers are needed. These limitations meant that it was necessary to design a new transmission architecture to increase the flexibility, functionality, reliability and interoperability of networks -> SDH.

SDH INTRODUCTION
The Synchronous Digital Hierarchy (SDH) is an ITU-T universal standard which defines a common and flexible communication architecture to transport telecommunication services.
- SONET (Synchronous Optical Network) is today a subset of SDH, promoted by ANSI and developed at Bellcore Labs in 1985. In 1988, broadband ISDN was created to transport data, voice, video and multimedia simultaneously over common transmission infrastructures. ATM was selected for the switching layer, and SDH for transport at the physical layer. SDH was first introduced into telecom networks in 1992. It is based on overlaying a synchronous multiplexed signal onto a light stream transmitted over fibre-optic cable, and is also used on radio relay links, satellite links and electrical interfaces between equipment.

SDH advantages:
- Provides the definition of a flexible architecture capable of accommodating future applications with a variety of transmission rates (flexible transport of plesiochronous and synchronous signals).
- Enables transmission over multiple media, allowing internetworking between different manufacturers by means of a set of generic standards and open interfaces.
- Basic operations such as multiplexing, mapping or alignment are byte-oriented, to keep transported elements identified through the whole transmission path.
- All the nodes must be fed with the same master clock: synchronisation enables us to insert and extract tributaries directly at any point and at any bit rate, without delay or extra hardware.
- Provides an overhead permitting operation, administration and maintenance (OAM) functions, which are essential to enable centralized management.
- Provides mechanisms to protect the network against link or node failures, to monitor network performance and to manage network events (SDH Resilience).
- Scalability: transmission rates of up to 40 Gbps, making SDH a suitable technology for high-speed trunk networks.

Some SDH Features

SDH Network Elements
- Regenerators: synchronize using the received signal and replace the RSOH bytes before retransmitting the signal. The MSOH, POH and payload are not altered.
- Multiplexers, to insert and extract data at synchronized sample points:
  o LTM: combines plesiochronous and synchronous input signals into a higher-bit-rate STM-N signal.
  o LM: multiplexes/demultiplexes synchronous STM-N signals within STM-M (M>N).
  o ADM: low-bit-rate plesiochronous and synchronous signals can be extracted from or inserted into the high-speed SDH bit stream.
- Digital Cross-Connects (DXC): allow the mapping of PDH tributary signals into virtual containers, as well as the switching of several containers. The switched traffic can be either SDH streams or selected tributaries.

Topology
- Ring: frequently used to build fault-tolerant architectures. Its main advantage is survivability.

Hierarchical Master-Slave Synchronization
- A master clock synchronizes the slave clocks, directly or indirectly according to a tree topology, and these are organized in two or more hierarchical levels.
- Protection mechanisms against link and clock failures are planned through alternate synchronization routes, not only between parent and child clocks, but also between siblings, or even "uncle" and "nephew" nodes.

Elements of this architecture
- Primary Reference Clock (PRC): an autonomous caesium atomic clock, or a clock that accepts reference synchronization from radio or satellite (GPS or Loran-C).
- Slave clocks: SSU (Synchronization Supply Unit) or BITS (Building Integrated Timing Supply). There are transit and local nodes: SSU-T and SSU-L.
- Clocks at the NE: SEC (SDH Equipment Clock).
- Synchronous Ethernet has the same synchronization network model as SDH.
- Higher bit rates over STM-1: through the hierarchy.
- Contiguous concatenation: allows bit rates in excess of the capacity of the containers. The payload can be distributed over several containers. Creates big containers that cannot be split into smaller pieces during transmission.
  o VC-4-4c (599.040 Mbps), VC-4-16c (2,396.160 Mbps), VC-4-64c (9,584.640 Mbps), VC-4-256c (38,338.560 Mbps).

SDH Limitations
- Transport of different signal rates: bit rates other than those given by the virtual containers (VC-x and contiguous concatenation) are not possible.
- The need for one simple encapsulation method capable of accommodating any data packet protocol.
- Bandwidth efficiency: the need to use bandwidth precisely.

Next-Generation SDH (NG-SDH)
- Virtual Concatenation (VCAT), to provide more flexibility in matching the bandwidth of the client signal.
  o Resolves the granularity problem of SDH, adapting transmission speed to user requirements by using virtual concatenation.
    ▪ User data mapped to groups of virtual containers.
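The contiguous-concatenation rates listed above are exactly N times the VC-4 payload rate of 149.760 Mbps, which is easy to verify:

```python
# Check that each VC-4-Nc rate quoted in the text is N x 149.760 Mbps.
VC4_MBPS = 149.760
quoted = {4: 599.040, 16: 2396.160, 64: 9584.640, 256: 38338.560}
for n, rate in quoted.items():
    assert abs(n * VC4_MBPS - rate) < 1e-6
    print(f"VC-4-{n}c = {n * VC4_MBPS:.3f} Mbps")
```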
Inverse multiplexing (G.805)
  o Optimizes the use of the SDH network.
    ▪ Virtual Concatenation offers the user a granular bandwidth choice, optimizing the use of network resources (better efficiency of the SDH network).
  o Transparency in the SDH network.
    ▪ Individual VCs are carried as traditional virtual containers.
    ▪ Core nodes are transparent to VCAT.
    ▪ End nodes must support VCAT functionalities.
    ▪ The receiver node reassembles the user frame and must compensate for the delay differences of each path. Delay correction has a maximum limit of 512 ms, suitable for continental networks (in fact, 100 ms is more than sufficient).
- Link Capacity Adjustment Scheme (LCAS), to provide the ability to change the size of a VCAT signal.
  o G.7042. Provides soft protection and a mechanism for load sharing. It is an extension of virtual concatenation.
  o Designed to manage the bandwidth allocation of a VCAT path. LCAS can add and remove members of the VCG that controls a VCAT channel.
    ▪ Dynamic bandwidth: allows bandwidth changes during the service. BW can be managed by adding or dropping VCs of the VCG. Asymmetric configurations.
  o Protection and failure tolerance.
    ▪ Increases the availability of VCs under failures or changes.
    ▪ Automatically decreases link capacity if a VC path has a failure, increasing it again when repaired.
- Generic Framing Procedure (GFP): used for mapping packet-based signals into the constant-bit-rate SDH signals.

G.709 OPTICAL TRANSPORT NETWORK
- OTN technology enables multiple networks and services, such as legacy SONET/SDH, to be combined seamlessly into a common infrastructure for data, voice, video and storage applications.
- OTN, or "digital wrapper", was created to combine the benefits of SDH/SONET technology with the bandwidth expansion capabilities offered by DWDM.

NETWORK EVOLUTION
For decades, large high-speed backbone networks have been circuit-switched networks, based on TDM running SDH/SONET.
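As an illustration of the granularity VCAT buys, here is a small helper that picks the smallest VC-4-Nv group covering a given client rate (a sketch; the Gigabit Ethernet figure is the classic motivating case):

```python
# Smallest number of VC-4 members (149.760 Mbps each) whose combined payload
# covers a given client rate -- the granularity benefit of VCAT.
VC4_MBPS = 149.760

def vcat_members(client_mbps, member_mbps=VC4_MBPS):
    n = 1
    while n * member_mbps < client_mbps:
        n += 1
    return n

# Gigabit Ethernet (1000 Mbps) fits in VC-4-7v (~1048 Mbps of payload),
# instead of burning a whole VC-4-16c (~2.4 Gbps) with contiguous concatenation.
n = vcat_members(1000.0)
print(n, round(n * VC4_MBPS, 3))   # 7 1048.32
```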
Around 2012:
- TDM transitioning to Ethernet. The primary traffic type is bursty.
- Peer-to-peer computing and cloud computing as the new models.
- Internet-based video applications putting demand on bandwidth.
- The explosion of BW demand -> entering the ZettaByte era (ExaByte = 10^9 GB, ZettaByte = 1000 EB).
- With the escalation in demand for video, data and other bandwidth-hungry applications, the limitations of circuit-based systems have become readily apparent.
  o Underutilized or "wasted" bandwidth using TDM.
  o TDM is expensive to operate.
  o SDH capped at 40 Gbps.

Evolution to packet switching driven by:
- Growth in packet-based services (L2/L3 VPN, IPTV, VoIP, etc.)
- Desire for bandwidth/QoS flexibility.

New packet transport networks need to retain the same operational model. An MPLS (Multi-Protocol Label Switching) transport profile is being defined at the IETF (in collaboration with the ITU-T). IP/MPLS has the flexibility and scalability to use that capacity most efficiently. MPLS-enabled core networks bring packet-awareness.
- MPLS is a method for providing connection-oriented services for variable-length frames without regard to their type: whether they be IP packets or native ATM, SDH or Ethernet frames.

TECHNOLOGY FOR PACKETIZED NETWORKS
- PSEUDOWIRE (PW): a mechanism that emulates the essential attributes of a native service while transporting it over a packet-switched network (PSN). Supports many L1 & L2 services over a common packet-switched network infrastructure.
- PW + IETF -> MPLS/VPLS: emulates a LAN over an MPLS network.
- PW + IEEE -> Provider Backbone Transport (PBT): tunnels interconnect Ethernet access clouds.
- PW + ITU-T -> Transport MPLS (T-MPLS).
- MPLS Transport Profile (MPLS-TP): ITU-T and IETF convergence.

PACKET TRANSPORT CHARACTERISTICS

MPLS vs MPLS-TP

EVOLUTION IN THE CLOUD ERA

REQUIREMENTS IN THE CLOUD ERA
The migration to cloud is leading to massive changes in how communications networks are built and operated.
We will need:
- Capacity scale.
- Network and service agility (rapid reconfiguration and automation).
- Openness (interoperability across domains, layers and vendors).

The software-centric approach deploys faster, scales immediately, reduces capital expenditure, and simplifies network operations. Thus, new technologies appear: Software-Defined Networking (SDN) and Network Functions Virtualization (NFV).
- Virtualization as enabler.
- Use of COTS (commercial off-the-shelf) hardware, reducing CAPEX.
- Flexible, automated and programmable networks to reduce OPEX and deliver services faster to market (zero-touch management, Artificial Intelligence as self-learning algorithms that adapt and execute dynamically in specific network operating situations).
- Network slicing: using SDN and NFV it will be possible to configure the type of network that is required by each type of service, over the same hardware with different software.

SEGMENT ROUTING WITH SDN

NEW NEED FOR WAN CONNECTIONS
Enterprises built private data centers and then connected these assets with private WAN links, using legacy protocols such as MPLS. While MPLS is built to secure these p2p links between the enterprise and data centers, it is not designed to connect to the public cloud. The new trend is cloud and cloud connectivity:
- The volume of enterprise traffic is growing rapidly across all global regions and verticals.
- 50% of enterprise traffic is HTTP and HTTPS. Applications are moving from on-premises to the cloud.
- Access-site BW is reasonably good worldwide.
- Enterprises face the challenge of connecting long distances across the globe, with latency problems in TCP application response times and great variations in some geographies.
- Enterprises try to keep up with the scale and dynamic nature of cloud connectivity, including connecting mobile users across the globe (dynamic SD-WAN, software and control provisioning).

SD-WAN
A Software-Defined Wide Area Network (SD-WAN) is a virtual WAN architecture that allows enterprises to leverage any combination of transport services (including MPLS, LTE and broadband internet services) to securely connect users to applications. The MEF published the SD-WAN Service Attributes and Services (MEF 70) standard in August 2019. SD-WAN for 5G: mapping SD-WAN application performance and security to 5G slices.

Key advantages include:
- Reduced costs with transport independence across MPLS, 4G/5G LTE and other types of connections. Transmission is adapted to the most suitable service.
- Reliability.
- Security.
- Improved OPEX, replacing MPLS services with more economical and flexible broadband (including secure VPN connections).
- Improved application performance and increased agility (dynamically routing application traffic for efficient delivery and improved user experience).
- Optimized user experience and efficiency for Software-as-a-Service (SaaS) and public-cloud applications.
- Simplified operations with automation and cloud-based management.

SD-WAN combines all data transport types into a single overlay connection that is intelligently managed to route network traffic over optimal physical network connections (fiber, DOCSIS, MPLS, 4G, 5G, etc.). SD-WAN can automate much of the complexity of tunnel monitoring, redundancy, and failover configurations. It is possible to maintain MPLS tunnels (Hybrid SD-WAN).

STILL MPLS

THE EVOLUTION FROM CIRCUITS TO PACKETS: MPLS I

MPLS Introduction
IP
- The first protocol used and defined in the Internet.
- The only protocol for global communication in the Internet.
IP Cons
- Not connection-oriented: no QoS.
- Each router takes independent routing decisions based on the IP address.
- Large IP headers: minimum 20 bytes.
- Routing defined at the network level: slower than level-2 switching.
- Normally the routing is designed to obtain the shortest path: it does not take other metrics into account.

ATM
- Connection-oriented: supports QoS.
- Fast packet switching, the packets being of fixed length (cells).
- Integration of different traffic types (voice, data, video).

ATM Cons
- Complex.
- Costly.
- Not widely adopted: the ATM Forum was created to define interoperability standards in ATM systems to facilitate and extend their use. It was responsible for the development of a wide range of ATM standards.

Evolution
In the mid-90s the need to combine IP with ATM emerged: IP Switching (Ipsilon), Tag Switching (Cisco), Aggregate Route-based IP Switching (ARIS, IBM), etc.

Challenges for telecommunication operators:
- Scalability in the Internet backbone.
- Maintaining multiple parallel networks for different services is increasingly costly.
- Need to provide new services over existing infrastructure.
- Need to provide Quality of Service per application (QoS).

Solution:
- MultiProtocol Label Switching.
- First version by the IETF in 2001.
- Forwarding based on labels (label switching).
- Coexists with IP and is a better alternative than ATM.

Why "Multiprotocol"?
- With the label being independent of level 2 and level 3, it can transport ATM cells, IP packets or any other type of frame or packet.
- The MPLS switching algorithm does not depend on the transported protocol.

Idea: combine the forwarding algorithm used in ATM with IP.
- Defined between levels 2 and 3. Operates at "layer 2.5".
- MPLS is a hybrid model adopted by the IETF to incorporate the best characteristics of packet switching and circuit switching.
MPLS benefits
Have level-2 switching speed at level 3:
- Provides a forwarding mechanism independent of routing tables.
- The routers make decisions to forward packets based on the content of a label, instead of doing a complex search based on the destination IP address.
- Assigns labels based on the concept of Forwarding Equivalence Classes (FEC): the considerations that determine how a packet is assigned to a FEC can become ever more complicated, without any impact at all on the routers that merely forward labeled packets.
- Packet payloads are not examined by the forwarding routers, allowing for different levels of traffic encryption and the transport of multiple protocols.
- Eliminates multiple layers: using ATM at level 2 and IP at level 3 implies an overlay model in which it is necessary to define the interrelation between both.
  o Resolves the problems of IP over ATM: control and management complexity, and scalability.
- Separation between routing and forwarding: allows the development of new routing protocols without changing the forwarding techniques in each Internet router.
- Facilitates the division of functionalities in the network: processing of packets at the edges; forwarding in core nodes. Scalability.
- Improves routing scalability through the stacking of labels: complete routing tables are not necessary in the interior routers of a transit domain; only paths to the end routers are needed.
- Offers mechanisms to handle data flows of different granularities.
- Packets can be assigned a priority label, making Frame Relay and ATM-like quality-of-service guarantees possible.
  o This function relates to the CoS field.
- A packet can be forced to follow an explicit route rather than the route chosen by the normal dynamic algorithm as the packet travels through the network.
  o This may be done to support traffic engineering (TE, MPLS-TE), as a matter of policy, or to support a given QoS.
Elements of MPLS technology

Forwarding Equivalence Classes (FEC)
Defined in RFC 3031. A FEC is a representation of a group of packets that share the same transport requirements.
- A traffic policy examines and classifies the traffic according to a set of conditions or attributes.
- Possible classification attributes:
  o Combination of source and destination subnetwork.
  o Combination of destination subnetwork and application type.
  o Source or destination port number.
- The assignment of a packet to a certain FEC is done just once, when the packet enters the network.
  o Packets that belong to the same FEC receive the same forwarding treatment.

Label characteristics
The label consists of 20 bits that are used to identify a certain FEC. The MPLS header is inserted between the level-2 and level-3 headers. Its other fields are:
- TC (Traffic Class) (3 bits), previously known as EXP (Experimental): defines the CoS, which can influence the discarding algorithms in the queues applied to the packets that pass through the network.
- S (Stack bit): indicates the position within a label stack. If it is 1, this is the last (bottom) label on the stack.
- TTL (Time To Live) (8 bits): decreases each time the packet passes through an MPLS node; used to discard looping packets (avoid loops).

Label stacking
- Allows hierarchical operation in MPLS.
- Facilitates the operation mode with MPLS tunnels. Only the routers at the ingress and the egress of a tunnel need to understand the underlying traffic carried over the tunnel.
- A Last-In First-Out stack of labels placed by applications such as TE, VLL, VPLS, VPN.

Packets that pass through one or several MPLS networks can be labelled several times, as there exists an MPLS hierarchy. This can occur in different circumstances:
- Network scalability, hierarchically based, limiting the number of LSPs (Label-Switched Paths) between routers, these being aggregations of labels.
- Another application of label stacking is the creation of MPLS VPNs.
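The shim header layout above (Label 20 bits, TC 3 bits, S 1 bit, TTL 8 bits — 32 bits in total) can be sketched with plain bit operations; the label values below are made up:

```python
# Pack and parse the 32-bit MPLS shim header: Label(20) | TC(3) | S(1) | TTL(8).
def mpls_pack(label, tc, s, ttl):
    assert 0 <= label < 2**20 and 0 <= tc < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (tc << 9) | (s << 8) | ttl

def mpls_parse(word):
    return {"label": word >> 12, "tc": (word >> 9) & 0x7,
            "s": (word >> 8) & 0x1, "ttl": word & 0xFF}

hdr = mpls_pack(label=17, tc=0, s=1, ttl=64)
print(mpls_parse(hdr))   # {'label': 17, 'tc': 0, 's': 1, 'ttl': 64}
```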
MPLS Routers: Label Switch Routers
Label Switch Router (LSR) refers to any router that has awareness of MPLS labels.

Edge LSR, Label Edge Router (LER) or PE (Provider Edge router)
- Found at the edges of an MPLS network. Assigns and deletes packet labels.
  o When the last label is deleted, forwarding is based on the level-3 address (at the egress LER).
  o Ingress LER: head-end LSR; egress LER: tail-end LSR.
- Supports multiple ports connected to different networks (such as Frame Relay, ATM and Ethernet).

Core Label Switching Router (core LSR, LSR or P, Provider router)
- High-speed routers in the core of an MPLS network.
- Use the topmost stacked label to determine: the next hop, and the operation to be performed on the label: swap, pop, push.

Position of routers in the network and MPLS label operations

MPLS architecture: Control Plane
The CONTROL PLANE takes care of the routing information exchange and the label exchange between adjacent devices. The control plane builds a routing table (Routing Information Base [RIB]) based on the routing protocol. Various routing protocols, such as OSPF, IGRP, EIGRP, IS-IS, RIP, and BGP, can be used in the control plane for managing L3 routing.

Routing protocol -> RIB

The control plane uses a label exchange protocol to create and maintain labels internally, and to exchange these labels with other devices. The label exchange protocol binds labels to networks learned via a routing protocol, and these labels are maintained in a LIB (Label Information Base) table in the control plane. Label exchange protocols include MPLS LDP, the older Cisco Tag Distribution Protocol (TDP), and BGP (used by MPLS VPN). The Resource Reservation Protocol (RSVP) is used by MPLS TE to accomplish label exchange.
Label exchange protocol -> LIB

The control plane also builds two forwarding tables: a FIB from the information in the RIB, and a Label Forwarding Information Base (LFIB) table based on the label exchange protocol and the RIB. In most cases, the ingress router uses the FIB table for incoming packets, matching the destination IP to the best prefix (network) and assigning a label before forwarding. The LFIB table (core LSR) includes label values and associations with the outgoing interface for every network prefix.

MPLS architecture: Data Plane
The DATA PLANE takes care of forwarding based on either destination addresses or labels; the data plane is also known as the forwarding plane. The data plane is a simple forwarding engine that is independent of the type of routing protocol or label exchange protocol being used. The data plane forwards packets to the appropriate interface based on the information in the LFIB or the FIB tables.

MPLS Operation

MPLS key points
- Assignment of a particular packet to a particular FEC is done just once, as the packet enters the network.
- Packets are "labeled" before they are forwarded to the next hop.
- All forwarding is driven by labels.
- No further analysis of the packet's network-layer header at subsequent hops.
- The label is used as an index into a table which specifies the next hop and a new label. The old label is swapped with the new label and the packet is forwarded to its next hop.

The ingress LER identifies the egress LER to which the packet must be sent and the corresponding LSP (Label Switched Path). The label value used corresponds to the LSP.
- How does the ingress LER know the label value(s) to use? It is learnt through either the LDP (Label Distribution Protocol) or the RSVP-TE signaling protocols.
- The ingress LER inserts an MPLS header into a packet before the packet is forwarded. The label in the MPLS header encodes the packet's FEC.
At subsequent LSRs:
- The label is used as an index into a label forwarding table (LFIB) that specifies the next hop and a new label.
- The old label is replaced with the new label, and the packet is forwarded to the next hop.

The egress LSR strips the label and forwards the packet to its final destination based on the IP packet header.

Label-Switched Paths (LSPs)
Thus, for each FEC, a specific path called a Label Switched Path (LSP) is assigned.
- The LSP is unidirectional. Traffic in the opposite direction must take another LSP.

To set up an LSP, each LSR must:
- Assign an incoming label to the LSP for the corresponding FEC: labels have only local significance.
- Inform the upstream node of the assigned label.
- Learn the label that the downstream node has assigned to the LSP.

Thus, a label distribution protocol is needed so that an LSR can inform others of the label/FEC bindings it has made. A label forwarding table is constructed as the result of label distribution.

MPLS provides two options to set up an LSP:
- Hop-by-hop routing: each LSR selects the next hop independently for each FEC. The LSR supports any routing protocol (OSPF, etc.).
- Explicit routing: similar to source routing. The ingress LSR specifies the list of nodes that the packet needs to pass through.
  o Advantages:
    ▪ The operator has routing flexibility (policy-based, QoS-based).
    ▪ Can use routes other than the shortest path.
    ▪ Can compute routes based on constraints over a distributed topology database (traffic engineering).

Label exchange
Use of a signalling protocol:
- The IETF has not standardized one specific protocol.
- LDP (Label Distribution Protocol): creates Best-Effort (BE) LSPs. Does not support TE (Traffic Engineering).
- RSVP-TE: permits reservation of channels for large-bandwidth transmissions.
  o Originally for resource reservation, extended to propagate labels. Supports TE.
- CR-LDP (Constraint-based Routing of LSPs): LDP extended to support constraints (TE).
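The per-hop behaviour just described — top label in, (operation, new label, interface) out — amounts to a tiny table lookup. A toy sketch with made-up labels and interface names:

```python
# Toy LFIB: incoming top label -> (operation, outgoing label, outgoing interface).
LFIB = {
    17: ("swap", 23, "if1"),    # core LSR: replace 17 with 23, forward on if1
    23: ("pop", None, "if2"),   # penultimate/egress: remove the top label
}

def lsr_forward(stack, lfib):
    """Apply one LSR hop to a label stack (top of stack = last element)."""
    op, new_label, out_if = lfib[stack[-1]]
    stack = stack[:-1]                  # the incoming top label is consumed
    if op == "swap":
        stack = stack + [new_label]     # "push" would append without popping
    return stack, out_if

print(lsr_forward([17], LFIB))   # ([23], 'if1')
print(lsr_forward([23], LFIB))   # ([], 'if2') -> continue with the IP header
```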
Label Distribution Protocol (LDP)
An application-level protocol for the distribution of label bindings among LSRs (RFC 3036).
- Used to bind FECs to labels, by which LSPs are created. Forwarding tables are built, mapping the incoming label to the outgoing label.
- LDP sessions are established between LDP peers in the MPLS network (not necessarily adjacent).
- LDP-established LSPs always follow the IGP shortest path (e.g. OSPF).
  o The scope of these LSPs is limited to the scope of the IGP. They cannot traverse autonomous system boundaries.
  o Synchronization between the IGP and LDP is needed.

LDP message types:
- Discovery messages: announce and maintain the presence of an LSR in a network (Hello packets).
- Session messages: establish, maintain and end sessions between LDP peers.
- Advertisement messages: create, change and delete mappings between labels and FECs.
- Notification messages: provide signalling and error information.

LDP Operation
Label distribution ensures that adjacent routers have a common view of FEC/label bindings.

LDP Label Distribution Modes
The Label Distribution Protocol (LDP) is used to establish MPLS transport LSPs when traffic engineering is not required. It establishes LSPs that follow the existing IP routing table, and is particularly well suited to establishing a full mesh of LSPs between all of the routers on the network. LDP can operate in many modes to suit different requirements; however, the most common usage is unsolicited mode, which sets up a full mesh of tunnels between routers.
- In solicited mode, the ingress router sends an LDP label request to the next-hop router, according to its IP routing table. This request is forwarded through the network hop-by-hop by each router. Once the request reaches the egress router, a return message is generated. This message confirms the LSP and tells each neighbour router the label mapping to use on each link for that LSP.
- In unsolicited mode, the egress routers broadcast label mappings for each external link to all of their neighbours. These broadcasts are fanned across every link through the network until they reach the ingress routers. At each hop, they inform the upstream router of the label. This mode of label distribution allows an LSR to distribute bindings to upstream LSRs that have not explicitly requested them. It avoids black-holing traffic while the LSP is being requested.

Two modes of label distribution between R1 (Edge LSR) and R2 (LSR):
- In the downstream-on-demand distribution process, LSR R2 requests a label for the destination 172.16.10.0. R1 replies with a label mapping of label 17 for 172.16.10.0.
- In the unsolicited downstream distribution process, R1 does not wait for a request for a label mapping for prefix 172.16.10.0 but sends the label mapping information to the upstream LSR R2.

Label Retention Modes

More MPLS Features

Penultimate Hop Popping (PHP)
- PHP is used by MPLS edge routers to reduce the load of two lookups (an MPLS lookup plus an IP lookup).
- The label between the penultimate LSR and the egress LSR can be the Explicit Null label (value 0).
  o Explicit null could be used in environments where you want to use MPLS QoS values that are different from the IP DSCP/IP Precedence values.
- The implicit null label allows an egress LSR to receive MPLS packets from the previous hop without the outer LSP label.
  o The egress router has to signal to its upstream neighbour (the penultimate router) that it should NOT swap the label but pop it. For this, it uses the implicit null label (= 3) in TDP/LDP updates to signal that the top label should be popped, not rewritten.

LSP Merge
- Merging two LSPs (going to the same destination) reduces the number of labels being used in the network.
However, it makes it impossible to differentiate between traffic coming from the two different sources once the merge has happened.
- Application: protection.

Equal Cost Multi Path (ECMP)
- As a means of potentially reducing delay and congestion, IP networks have taken advantage of multiple paths through a network by splitting traffic flows across those paths. This creates problems with flows arriving with jitter and out of order. None of this violates the basic service offering of IP, but it is detrimental to the performance of various classes of applications. It also complicates the measurement, monitoring, and tracing of those flows.

THE EVOLUTION FROM CIRCUITS TO PACKETS: MPLS II

MPLS with Traffic Engineering (MPLS-TE) recovery mechanisms

TE Objectives
"A major goal of Internet Traffic Engineering is to facilitate efficient and reliable network operations while simultaneously optimizing network resource utilization and performance." RFC 2702, Requirements for Traffic Engineering over MPLS

TE defines how flows should be routed to use the network resources efficiently (to avoid congestion), including reliability:
- BW optimization (traffic load balancing, better network resource utilization).
- Strict QoS guarantees.
- Fast recovery in case of a fault.

Classic fish problem:
- Routers always forward traffic along the least-cost route as discovered by the intradomain routing protocol (IGP).
- Network bandwidth may not be efficiently utilized:
o The least-cost route may not be the only possible route.
o The least-cost route may not have enough resources to carry all the traffic.
o Some links are overused while others are underused.

Traffic Engineering (TE) is used when the problems result from an inefficient mapping of traffic streams onto the network resources. In such networks, one part of the network suffers from congestion during long periods of time, possibly continuously or due to changing traffic patterns, while other parts of the network have spare capacity.
- Reduce the overall cost of operations by more efficient use of bandwidth resources.
- Prevent a situation where some parts of a service provider network are over-utilized (congested) while other parts remain underutilized.
Basically, TE helps you optimize network resource utilization, provide better quality of service, and enhance network and service availability.

Solution:
- Tunneling techniques between source and destination, such that the intermediate nodes do not interfere in the routing decision. Example at L2: ATM PVCs/SVCs -> additional cost (equipment, network operation) and scalability problems.
- MPLS-TE: each LSP (Label Switched Path) has its own constraints: BW, affinities, routing constraints, etc. The LSP is computed to satisfy the set of requirements.

Components of MPLS-TE
1. LSP-TE configuration in the ingress LERs.
o Configure attributes: destination address, BW, required protection and restoration, affinities, affinity relations, etc.
2. Distribution of resource and topology information (e.g. OSPF with TE extensions, reflecting link characteristics and reservation states).
3. Computation of the LSP-TE.
o Each router uses its routing table and its topology and resource database to compute the constrained shortest path (CSPF), taking into account the LSP-TE constraints.
o Define the recovery mode of the LSP in case of a fault.
4. LSP-TE setup: once calculated, the LSP is established with RSVP-TE.
5. (Labeled) packet forwarding (intermediate routers do not make any routing decision).

OSPF-TE: Resource and Topology Information
Traffic engineering information is carried in OSPF traffic engineering (OSPF-TE) link state advertisements. OSPF-TE LSAs are Type 10 Opaque LSAs, as defined in RFC 2370. Type 10 Opaque LSAs have area flooding scope. OSPF-TE LSAs have special extensions that contain information related to traffic engineering; these extensions are described in RFC 3630.
The extensions consist of Type/Length/Value triplets (TLVs) containing the following information:
- Link ID
- IP address of the local and the remote interface of the link
- TE metric for the link (by default, equal to the OSPF link cost)
- Maximum bandwidth on the interface
- Maximum reservable bandwidth on the interface
- Unreserved bandwidth on the interface (BW not yet reserved)
- Administrative group(s) to which the interface belongs: affinity classes, colors, Shared Risk Link Groups (SRLG)

Administrative groups
Each interface may be assigned to one or multiple administrative groups. Colors are often used to describe these groups (Gold, Silver, Bronze), but user-friendly names can be used as well (Voice, Management, BestEffort). Names are locally significant to the router. When an ingress LSR calculates the path for an LSP-TE, it takes into account the administrative groups to which an interface belongs; it is possible to include or exclude administrative groups.

Affinity
- Value and mask.
- Can be used for: cost of the tunnel, physical types of links, physical distance, etc.

Shared Risk Link Groups (SRLG)
- An SRLG is a characteristic of a link, assigned by the network administrator, indicating that links share a common fiber or conduit.
- The SRLG is flooded by the IGP and is used when backup tunnels are deployed.
- Configuring SRLG membership enhances backup tunnel path selection, so that a backup tunnel avoids using links that are in the same SRLG as the interfaces the backup tunnel is protecting.
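As a rough illustration of the TLV layout listed above, the sketch below hand-encodes a few sub-TLVs of the RFC 3630 Link TLV (the type codes are from the RFC; the specific metric, bandwidth and group values are invented examples, and real implementations do this inside the OSPF stack):

```python
# Hand-encoding of OSPF-TE sub-TLVs: each sub-TLV is a 2-byte type,
# a 2-byte length, then the value padded with zeros to a 32-bit
# boundary (the length field excludes the padding). Bandwidths are
# carried as IEEE floats in bytes per second.
import struct

def sub_tlv(tlv_type, value):
    pad = (-len(value)) % 4
    return struct.pack("!HH", tlv_type, len(value)) + value + b"\x00" * pad

# Sub-TLV type codes inside the Link TLV (RFC 3630):
TE_METRIC, MAX_BW, MAX_RSV_BW, UNRSV_BW, ADMIN_GROUP = 5, 6, 7, 8, 9

# Example link: TE metric 10, a 1 Gb/s interface, administrative group
# bit 0 (standing for "Gold" in this sketch).
body = (
    sub_tlv(TE_METRIC, struct.pack("!I", 10))
    + sub_tlv(MAX_BW, struct.pack("!f", 1e9 / 8))   # bytes/s, not bits/s
    + sub_tlv(ADMIN_GROUP, struct.pack("!I", 1 << 0))
)
print(len(body))  # 24 bytes: three 8-byte sub-TLVs
```

The 32-bit alignment and the bytes-per-second float encoding are the two details that most often trip up hand decoders of these LSAs.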
OSPF-TE: LSA Triggers
The following events trigger the device to send out OSPF-TE LSAs:
- A change in the interface's administrative group membership.
- A change in the interface's maximum available bandwidth or maximum reservable bandwidth.
- A significant change in unreserved bandwidth per priority level:
o The triggering module decides when to advertise changes in link state.
o A link's reserved-bandwidth thresholds can be set.
- In addition, OSPF-TE LSAs can be triggered by OSPF itself, for example when an interface's link state changes.
When an interface is no longer enabled for MPLS, the device stops sending out OSPF-TE LSAs for the interface.

TE Link State Database
LSRs use the IGP extensions to create and maintain a TE Link State Database (TE-LSDB or TED) that contains the TE network topology. It is updated by IGP flooding whenever a change occurs (establishment of a new LSP, change of available bandwidth).

LSP Attributes and Requirements Used for TE
In addition to the topology information in the TED, the following user-specified parameters are considered when the ingress LSR calculates a TE path for a signalled LSP:
- Destination address of the egress LER
- Explicit path to be used by the LSP
- Class of Service (CoS) value assigned to the LSP
- Bandwidth required by the LSP
- Setup and hold priority for the LSP:
o The hold priority specifies how likely an established LSP is to give up its resources to another LSP. To be preempted, an established LSP must have a weaker (numerically higher) hold priority than the preempting LSP's setup priority. Priorities range from 0 (highest priority) to 7 (lowest priority).
o An LSP's setup priority can never be stronger (numerically lower) than its hold priority. By default, an LSP's setup priority is 7 and its hold priority is 0.
- Metric for the LSP
- Inclusion or exclusion of links belonging to specified administrative groups

CSPF algorithm
Using the information in the TED, as well as the attributes and requirements of the LSP, CSPF calculates a traffic-engineered path.
- If more than one LSP needs to be enabled, the LSP to calculate is selected based on its setup priority and bandwidth requirement.
When the ingress router invokes the CSPF algorithm, it creates a subset of the TED information based on the provided constraints:
1. Prune all links that do not have enough reservable BW.
2. Prune all links that do not contain an included administrative group (color).
3. Prune all links that do contain an excluded administrative group (color).
4. Calculate the shortest path from ingress to egress using the remaining subset of information.

LSP-TE setup
Once the path is computed:
- Forwarding state needs to be established along the path.
- Resources need to be reserved along the path.
Two approaches:
- RSVP extensions (RSVP-TE)
- CR-LDP (Constraint-based Routing LDP)
RSVP extensions
- How to send PATH messages on explicit routes? A new object, the ERO (Explicit Route Object), similar to source routing, is introduced.
- PATH messages (from head to tail) carry a LABEL_REQUEST object.
- The RESV message performs the label binding.
CR-LDP
- In addition to using label_request and label_mapping messages, uses an ER object similar to the ERO.

LSP-TE setup: RSVP
1. The ingress LER sends an RSVP Path message towards the egress LER. The Path message contains the traffic-engineered path calculated by the CSPF process, specified as an EXPLICIT_ROUTE object (ERO). The Path message travels to the egress LER along the route specified in the ERO.
2. The Path message requests resource reservations on the LSRs along the path specified in the ERO.

RSVP vs CR-LDP

LSP Admission Control
On receipt of a PATH message:
- The router checks if there is bandwidth available to make the reservation.
- If bandwidth is available, the RSVP reservation is accepted.
- The PATH message is sent to the next hop (downstream).
On receipt of a RESV message:
- The router actually reserves the bandwidth for the TE LSP.
- A label is allocated and the RESV message is sent to the upstream node.
- If preemption is required, lower-priority LSPs are torn down and OSPF/IS-IS updates are triggered.

Recovery Techniques in MPLS-TE
To ensure the reliability of LSP-TEs it is important to define how the LSPs will be recovered in case of faults.
Global recovery
- Restoration: the default recovery mode in MPLS-TE, in which the failure is notified to the ingress LER through RSVP and a routing protocol (IGP), which is then used to recalculate a new path.
- Protection: the ingress LER establishes two LSP-TEs, one primary and one backup.
Local recovery
- Protection (Fast Reroute): the LSP affected by the fault is locally rerouted by the node immediately upstream of the failure (node or link).

Recovery Time
- Fault Detection Time: depends on the fault detection mechanism in use and the underlying layer 1 and layer 2.
- Hold-Off Time: the time MPLS waits before trying to reroute; useful if the lower layers have their own recovery scheme.
- Fault Notification Time: time for the FIS (Fault Indication Signal) to be received by the node in charge of the traffic recovery.
- Recovery Operation Time: synchronization between network elements to coordinate the recovery.
- Traffic Recovery Time: time from the last recovery action until the traffic is completely recovered.

Global MPLS-TE Restoration
1. Configuration of an LSP-TE in the ingress LER (destination, BW, priority, protection and restoration requirements, etc.).
2. Node R3 detects the fault in link R3-R4 and sends a FIS (Fault Indication Signal), as an RSVP message or IGP update, to the ingress LER (R1).
3. Node R1 has to find another path that complies with the constraint requirements, using a CSPF (Constrained Shortest Path First) algorithm.
4. The new LSP is signalled and the traffic is rerouted.
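Restoration step 3 reruns the CSPF computation described earlier: prune the topology, then run a shortest-path algorithm over what remains. A toy sketch of those four pruning-and-compute steps, with an invented four-router topology and made-up link attributes:

```python
# Minimal CSPF sketch: links are (u, v, cost, unreserved_bw, colors).
# Prune on bandwidth and administrative groups, then run Dijkstra.
import heapq

def cspf(links, src, dst, bw, include=frozenset(), exclude=frozenset()):
    adj = {}
    for u, v, cost, unresv, colors in links:
        if unresv < bw:                         # 1. not enough reservable BW
            continue
        if include and not (include & colors):  # 2. missing an included color
            continue
        if exclude & colors:                    # 3. carries an excluded color
            continue
        adj.setdefault(u, []).append((v, cost))
    # 4. Shortest path over the pruned topology
    dist, heap = {src: 0}, [(0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        for v, cost in adj.get(node, []):
            nd = d + cost
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v, path + [v]))
    return None  # no path satisfies the constraints

links = [
    ("R1", "R2", 1, 100, {"gold"}),
    ("R2", "R4", 1, 30,  {"gold"}),   # too little unreserved bandwidth
    ("R1", "R3", 2, 100, {"gold"}),
    ("R3", "R4", 2, 100, {"gold"}),
]
print(cspf(links, "R1", "R4", bw=50, include={"gold"}))  # ['R1', 'R3', 'R4']
```

The IGP-shortest path R1-R2-R4 is rejected because link R2-R4 cannot carry the requested 50 units, which is exactly the fish-problem behaviour that plain IGP routing cannot express.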
Advantages
- Does not require any additional configuration of backup paths (even though these paths could have been precalculated).
Drawbacks
- The slowest recovery mechanism compared to the protection mechanisms, given that it implies the transmission of the Fault Indication Signal (FIS) to the ingress LER, a dynamic path computation and the signalling of the new LSP-TE.
- Lack of predictability: there is no guarantee that the LSP-TE can be rerouted upon failure. A last-resort option is to relax the TE LSP constraints.

Global MPLS-TE Protection
The ingress LER calculates and signals a backup LSP-TE for each primary LSP before any fault occurs (both LSPs need to follow disjoint paths with respect to the elements against which the protection is defined). The constraints for the primary and backup LSPs can be equal or different. When the ingress LER receives the FIS, it switches the traffic to the backup LSP.
Advantages
- In networks with many links and nodes and a limited number of TE LSPs to protect, this mechanism is easy to deploy and requires a limited amount of provisioning. In contrast, local protection would require protecting every network element with backup tunnels.
- Given that the backup LSP is signalled before the failure, the path is deterministic and provides strict control of the backup path.
Drawbacks
- Requires doubling the number of LSP-TEs, which has an impact on scalability in full-mesh networks.
- In many cases it cannot provide recovery times in milliseconds, given that the ingress LER has to be informed first.
- Given that the primary and backup LSPs need to be disjoint, the primary may not follow the optimum path.

Local MPLS-TE Protection
Local protection techniques
- Local: the LSP is rerouted by the node immediately upstream of the node or link that has failed. The rerouting happens at the place closest to the fault, instead of at the ingress LER.
This eliminates the need to send a FIS to the ingress LER (faster recovery time).
- Protection: a backup resource is pre-assigned and pre-signalled before the fault, so nothing needs to be calculated when a fault occurs.
Techniques, called Fast Reroute:
- One-to-one backup (or detour)
- Facility backup (or bypass)

An LSP R1-R2-R3-R4-R5 is "fast reroutable" if it is signalled with a set of specific attributes in the RSVP Path message.
- These indicate the wish to benefit from local recovery in case of a fault.
- The LSP affected by a fault is locally rerouted by the node immediately upstream of the fault, the PLR (Point of Local Repair) (R2, if the fault is in R3 or in link R2-R3).

Local MPLS-TE Protection: Fast Reroute
Fast Reroute uses backup tunnels to reroute around affected network elements.
- When the backup tunnel ends at the next hop after the PLR, it is called a NHOP (Next-HOP) backup tunnel.
- If it ends at a neighbor of the next node after the PLR, it is called a NNHOP (Next-Next-HOP) backup tunnel. In this case, that node is called the Merge Point.

Fast Reroute: One-to-one backup
At each hop, one backup LSP (called a detour) is created for each fast-reroutable LSP-TE. It only protects the LSP-TEs T1, T2 and T3 against faults in link R3-R4 (link protection) and in node R4 (node protection). Each node within the fast-reroutable LSP-TE has to perform the same operation, and each detour must avoid the resources against which it protects.
Operation mode
- Labels are assigned for the primary LSP-TE, T1, and for the detour, D1.
- When a fault occurs in R4, PLR R3 detects the fault and reroutes the traffic onto the detour with the detour's labels.

Fast Reroute: Facility backup
A NHOP tunnel is required to protect against a link fault: link protection.
A NNHOP (bypass) tunnel is required to protect against a (bypassed) node fault: node protection.
- This also protects against the failure of the link between this node and the one immediately upstream (link protection).
Backup tunnels can be shared.
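The label operations the PLR performs in facility backup can be sketched in a few lines (label values and names are invented for this toy model; real routers do this in the forwarding plane):

```python
# Toy model of facility-backup label handling at the PLR. In steady
# state the PLR swaps the top label; on failure it performs the same
# swap (or writes the label the Merge Point expects, for a NNHOP
# bypass) and then pushes the bypass tunnel's label on top.

def forward(label_stack, swap_to, bypass_label=None):
    """Return the outgoing label stack for one packet at the PLR."""
    stack = [swap_to] + label_stack[1:]   # normal swap of the top label
    if bypass_label is not None:          # failure: reroute into the bypass
        stack = [bypass_label] + stack    # push the backup tunnel label
    return stack

# Steady state: swap 20 -> 30 towards the protected next hop.
print(forward([20], swap_to=30))                    # [30]
# After the failure: same swap, then push bypass label 99.
print(forward([20], swap_to=30, bypass_label=99))   # [99, 30]
```

Because only the pushed top label identifies the bypass tunnel, many different LSP-TEs can share the same backup tunnel, which is what makes facility backup scale.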
Advantages:
- Requires a smaller set of backup tunnels if the BW protection must be guaranteed.
- The number of backup tunnels is not a function of the number of LSP-TEs in the MPLS network, which preserves scalability.
Facility backup operation:
- A NHOP tunnel protects against a link fault (link protection). When the PLR detects a fault, it performs the same label swap as before the fault but also pushes the backup tunnel label onto the stack.
- A NNHOP (bypass) tunnel protects against a (bypassed) node fault (node protection). The PLR writes the label that the MP (Merge Point) expects to receive and pushes the label for the backup path on top. This way, the MP receives the same packet, but through another interface.
o The PLR learns the NNHOP LSR's label through RSVP extensions.
- Label stacking allows MPLS to reroute different LSP-TEs over the same backup path: the topmost label identifies the backup tunnel.

Path Reoptimization
Once the fault has been handled and backup tunnels are in use, it is possible that the paths in use are no longer optimal.
- The PLR (R1) sends an RSVP Path Error message to the ingress LER to indicate that a local reroute has occurred. This triggers the search for an alternative path.

Fast Reroute using SRLGs
Routers can be configured to automatically create backup tunnels that avoid the MPLS TE SRLGs of their protected interfaces. Backup tunnels provide link protection by rerouting traffic to the next hop, bypassing failed links or avoiding SRLGs.

Local MPLS-TE Protection
Advantages
- A local protection mechanism that can provide very fast recovery times, equivalent to SONET/SDH.
o Facility and one-to-one backup are equivalent in terms of recovery time.
- Can provide guaranteed BW, propagation delay and jitter in case of a fault. In the facility method, the backup capacity can be reduced drastically because backup tunnels can be shared.
- Offers high granularity in the concept of Class of Restoration (CoR).
- The facility method has high scalability, given that the number of backup tunnels is a function of the number of network elements to be protected and not of the number of fast-reroutable LSP-TEs.
Drawbacks
- Requires configuration and establishment of a number of backup LSP-TEs, which in large networks may not be negligible.
- The one-to-one method has limited scalability in large networks.

Comparison of Global and Local Protection
Objective: to evaluate the number of required backup tunnels with global path protection, Fast Reroute facility backup, and one-to-one backup, based on the following assumptions.
Network description:
- D: network diameter (average number of hops between a head-end LSR and a tail-end LSR).
- C: degree of connectivity (average number of neighbors).
- N: total number of nodes (LSRs).
Other parameters:
- L: total number of links to be protected with Fast Reroute.
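With the parameters defined above, a back-of-the-envelope comparison can be made. The closed-form approximations below are assumptions of this sketch (the notes' own derivation is not reproduced here): a full mesh of unidirectional LSPs, one global backup per primary, roughly one detour per hop per LSP for one-to-one, and roughly one NHOP bypass per link for facility backup.

```python
# Rough comparison of backup-tunnel counts under the assumptions stated
# in the lead-in (illustrative approximations, not exact formulas).

def backup_tunnel_counts(n, c, d):
    lsps = n * (n - 1)        # full mesh of unidirectional LSPs
    links = n * c // 2        # N nodes with C neighbors on average
    return {
        "global path protection": lsps,   # one backup LSP per primary
        "one-to-one detour": lsps * d,    # ~one detour per hop per LSP
        "facility (bypass)": links,       # ~one NHOP bypass per link
    }

print(backup_tunnel_counts(n=50, c=3, d=5))
```

Even for a modest 50-node network this shows the scaling argument made above: facility backup needs tunnels on the order of the number of network elements, while one-to-one backup grows with the number of LSPs times the path length.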
