JTO Phase II Data Network & IT PDF
Summary
This document provides an overview of the JTO Phase II Data Network & IT project, focusing on BSNL's data network initiatives. It details different projects, network architectures, and services offered, including MPLS-based IP network infrastructure, broadband access, VPN services, and value-added services. The document also explains the principles and functionalities of the project, specifically related to network architecture, project 1, MPLS, and related technologies.
JTO Phase - II DNIT Week 4

INDEX
Chapter No - Name of Chapter
1 - BSNL Data Network
2 - Overview of MPLS Technology & QoS
3 - LDP
4 - RSVP-TE
5 - MPLS based layer-3 VPN
6 - Cisco Router Configuration: MPLS based layer-3 VPN
7 - Configuration using MP-iBGP/eBGP, MP-iBGP/OSPF & MP-iBGP/Static/Default
8 - MPLS based layer-2 VPN & MPLS based Layer-2 VPN Configuration
9 - NMS & EMS
10 - MPLS-TP
11 - NGN
12 - VOIP
13 - IMS

JTO Phase II (DNIT) Version 1.0 Sep 2021 - For Restricted Circulation

1 BSNL DATA NETWORK

1.1 OBJECTIVE
The objectives of this chapter are to understand:
- Different projects for the Data Network
- Network Architecture
- Project 1 and Project 3
- Multiplay Broadband
- MNG PAN

1.2 INTRODUCTION TO BSNL DATA NETWORK
BSNL has set up NIB-II to provide world-class infrastructure to offer various value-added services to a broader customer base country-wide, which will help to accelerate the Internet revolution in India. Moreover, NIB-II has created a platform that enables e-governance, e-banking, e-learning, etc., with Service Level Agreements and guarantees in tune with global standards and customer expectations. BSNL has launched multiple projects over time to build a robust, world-class data network. Some of the major projects are listed below:
- Project 1: MPLS based IP Network infrastructure covering 71 cities, along with associated NMS, PMS, Firewall and Caching platforms.
- Project 2.1: Access Gateway platform using dialup, comprising narrowband RAS and DSL equipment.
- Project 2.2: Access Gateway platform comprising broadband RAS and DSL equipment.
- Project 3: Messaging and Storage platform, together with Provisioning, Billing and Customer care, and an Enterprise management system.
Multiplay Broadband: The Broadband Multi-Play network focuses on augmentation of the Broadband Access Network (Project 2.2 of NIB-II) to support multi-play services like Video on Demand, IPTV, VoIP and VPN, with guaranteed control of critical parameters like latency, throughput and jitter, to ensure high-grade delivery of real-time, near-real-time, non-real-time and best-effort services. It is similar to what was done as part of Project 2.2 of NIB-II, but much bigger in scope (almost 10 times), with an advanced protocol (RPR) being used in the aggregation network.

MNG PAN: MNG-PAN is an MPLS-TP based Next Generation Packet Aggregation Network which is being deployed alongside the existing Resilient Packet Ring (RPR), replacing OCLAN switches with OCPAN rings. Project 2.1 and 2.2 equipment has been replaced by Multiplay Broadband network equipment.

SERVICES PROVIDED IN NIB-II
Internet Access
- Leased Access Services.
- Digital Subscriber Line (DSL) Access Services: broadband "always-on" internet access over copper cables.
- Direct Ethernet Access Services: broadband "always-on" internet access using fibre to the building.
Virtual Private Network (VPN) Services
- Layer 2 MPLS VPN Services: point-to-point connectivity between corporate LAN sites.
- Layer 3 MPLS VPN Intranet and Extranet Services: LAN interconnectivity between multiple corporate LAN sites.
- Managed Customer Premises Equipment (CPE) Services.
Value Added Services
- Encryption Services: one of the end-to-end data security features.
- Firewall Services: one of the security features provided to the customer.
- Network Address Translation (NAT) Services: a service that enables private users to access public networks.
- Messaging Services
- Data Centre Services at Bangalore, Delhi and Mumbai

1.3 Network Architecture
Core Network: NIB-II shall constitute an integrated 2-layer IP and MPLS network. The Layer 1 network constitutes the high-speed backbone, comprising core routers with built-in redundancies supporting both TCP/IP and MPLS protocols, with link capacities in accordance with traffic justification. Its function is primarily limited to high-speed packet forwarding between the core nodes. All the A nodes in NIB-II form the Core layer. The five major nodes at Mumbai, Bangalore, Delhi, Kolkata and Chennai are configured in a full mesh with link bandwidth of STM-64. The remaining 9 nodes, at Pune, Hyderabad, Ahmedabad, Lucknow, Jullundhar, Jaipur, Indore, Ernakulam and Patna, are configured in a dual mesh with link bandwidths of STM-64/16 (refer Figures 1, 2 and 3).

Figure 1: A1 POPs IP MPLS Core Network
Figure 2: A1+A2+A3 POPs IP MPLS Core Network
Figure 3: A1+A2+A3+A4 POPs IP MPLS Core Network

1.4 Project 1 (MPLS based IP Network infrastructure initially covering 71 cities, along with associated NMS, PMS, Firewall and Caching platforms)
MPLS based networks provide a cost-effective way to enhance customer networking quality: rather than setting up and managing individual point-to-point circuits between each office, customers need to provide only one connection from their office router to a service-provider edge router. The service-provider edge router either forwards the IP packets in the IP network or, in the case of MPLS-configured access, labels the packets and routes them through its MPLS core to the edge closest to the destination.
IP/TCP connectivity shall be provided through Provider (P) routers and Provider Edge (PE) routers. The initial city-wise categorization of nodes under NIB-II Project 1 is given in Table 1.

Node type | No. of cities | Cities | Devices
A1 | 5 | Chennai, Mumbai, Bangalore, Delhi and Kolkata | Core router: Cisco 12416; Edge router: Cisco 7613
A2 | 3 | Hyderabad, Pune and Ahmedabad | Core router: Cisco 12410; Edge router: Cisco 7613
A3 | 6 | Lucknow, Jullundhar, Jaipur, Indore, Ernakulam and Patna | Core router: Cisco 12410; Edge router: Cisco 7613
A4 | 10 | Coimbatore, Chandigarh, Allahabad, Guwahati, Ranchi, Bhubaneswar, Raipur, Mangalore, Nagpur and Vijayawada | Core router: Juniper M40; Edge router: Cisco 7613
B1 | 21 | -- | Edge router: Cisco 7613
B2 | 26 | -- | Edge router: Cisco 7613

Table 1: City-wise categorization of nodes under NIB-II Project 1

At present the MPLS network has 129 core routers and 459 edge routers. The MPLS based network is a 2-layer, centrally managed IP backbone network designed to provide reliable routes to all possible destinations within and outside the country.

1.4.1 Network Architecture of Project 1
The Layer-1 core network (A1, A2 and A3 cities) constitutes the high-speed backbone (STM-64), as shown in the figure. The Layer-1 nodes consist of high-end, fully redundant core routers, interfaced to edge routers through Gigabit Ethernet interfaces. The Layer-2 edge network is the second layer of the IP backbone network and primarily supports MPLS edge functionality. The function of this layer is to enforce QoS and other administrative policies. This layer provides customer access through dedicated access and broadband access, and provides connectivity to secure VPNs as well as to Internet Data Centers.
Edge routers in cities are connected to the Core layer either locally through Gigabit Ethernet interfaces or remotely through dual-homed STM-16/STM-1 links. The network is envisaged to provide the QoS features associated with MPLS technology, with all traffic shaping and traffic engineering features. It provides peering interfaces to other domains and serves as an Internet Exchange for ISPs. It supports IP networking and Border Gateway Protocol (BGP) / Multi Protocol Label Switching (MPLS) technology. The network supports data, voice and video applications over Layer 2 and Layer 3 MPLS VPNs.

The network architecture of NIB-II provides for integration of other networks such as NIB-I and the MPLS VPN of BSNL. It shall provide a common Network Management System (NMS) and VPN / bandwidth provisioning system for the entire integrated IP network infrastructure. The network infrastructure for NIB-II is intended to provide IP network access, on both a retail and a wholesale basis, to DSL and leased access customers. The network provides a common IP infrastructure that supports all smaller open networks and sub-networks.

The primary objectives in setting up the MPLS based IP network:
- Building a common IP infrastructure that shall support all smaller networks and sub-networks. The platform is intended to be used for convergent services, integrating data, voice and video, and shall be the primary source of Internet bandwidth for ISPs, corporates, institutions, government bodies and retail users.
- Making the service very simple for customers to use, even if they lack experience in IP routing, along with Service Level Agreement (SLA) offerings.
- Making the service very scalable and flexible, to facilitate large-scale deployment.
- Being capable of meeting a wide range of customer requirements, including security, quality of service (QoS), and any-to-any connectivity.
- Being capable of offering fully managed services to customers.

Caching Platform: The Caching platform makes contents (HTTP, MMS, FTP etc.) available at the POP/core locations and saves upstream international bandwidth. The Caching platform enables faster responses from the web.

Services offered under Project 1: The following services are offered to customers using the MPLS based IP network.
Layer 3 MPLS VPN Services
- Intranet: managed & unmanaged
- Extranet: managed & unmanaged
- Internet Access services
Layer 2 MPLS VPN Services
- Ethernet over MPLS
- Frame Relay over MPLS
- PPP over MPLS
- Cisco HDLC over MPLS (optional)
- VPLS (Virtual Private LAN Service)
- Layer 2 any-to-any interworking (except ATM)
1. Encryption Services
2. Multicast Services
3. Firewall Services
4. Network Address Translation (NAT) Services

1.5 Project 3: Messaging and Storage Service Platform, Provisioning, Billing & Customer care, Enterprise Management System (EMS) and Security System
The core messaging system shall be the heart of NIB-II, enabling BSNL to add users across varied value-added services. This envisages design and upgradation of the current messaging system to grow from the existing NIB-I infrastructure supporting 650,000 users, so as to support the increasing user base. The system supports provisioning, billing, customer care and accounting for the complete range of IP based services mentioned, and meets next-generation requirements as well. The salient aspects of the project are summarized as follows:
- Setting up a proven, robust, scalable messaging solution with best-in-class security components.
- Roll-out across the country supported by 5 messaging & associated storage systems at Delhi, Mumbai, Bangalore, Chennai and Kolkata.
- Designed with a High Availability architecture with no single point of failure.
Components of the Solution:
The solution consists of the following components, with the items of functionality listed below:
(i) Messaging
  a) DNS, AAA
  b) MMP
  c) LDAP (Consumer, Replicator Hub, Primary and Secondary)
  d) SMTP IN & OUT
  e) Messaging Servers
  f) Address Book Servers, etc.
(ii) Storage
  a) SAN Switch & SAN Storage
  b) Tape Library
  c) Staging Servers, etc.

Storage platform: The various application servers placed at the 5 messaging storage locations (LDAP, AAA, EMS, Messaging, UMS & Billing etc.) require data storage capacity for storing users' mailboxes, billing data etc. Such huge storage requirements need to be met with fast, reliable and scalable storage devices, deployed as an "end-to-end high-performance switched-architecture Fibre Channel SAN (Storage Area Network) providing no single point of failure". The storage device should be compatible with servers from all major companies such as HP, IBM, SUN, Dell etc., so that the choice of application server platform remains independent of the storage device.

System Dimensioning: The user base will be serviced through 5 messaging and associated storage systems set up in the 5 cities, each connected through the IP backbone. Since the proposed user base is envisaged to increase in a phased manner, the associated messaging system is also proposed to be upgraded correspondingly in phases.

The billing system is capable of:
- Providing electronic versions of bills to customers over the Internet.
- Creation/modification of service.
- Processing service requests in real time and non-real time, and accounting in real time.
- Producing flexible billing depending upon the use of the service.

Security Systems
These include the following:
- Load Balancers
- Firewall Appliances
- Intrusion Detection System
- Antivirus system, etc.
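The flexible, usage-dependent billing described in this section can be illustrated with a small sketch. The following Python example computes a charge from slab rates; the slab boundaries and per-GB rates are invented for illustration and are not BSNL's actual tariff.

```python
# Sketch of usage-based billing: the charge depends on how much of the
# service was consumed in the billing period. Slabs and rates are
# illustrative assumptions only.

SLABS = [  # (upper bound in GB, rate per GB within this slab)
    (10, 0.0),            # first 10 GB free
    (100, 2.0),           # next 90 GB at 2.0 per GB
    (float("inf"), 1.0),  # beyond 100 GB at a discounted 1.0 per GB
]

def bill(usage_gb: float) -> float:
    """Compute the charge for one billing period from the slab rates."""
    charge, prev_cap = 0.0, 0.0
    for cap, rate in SLABS:
        if usage_gb <= prev_cap:
            break
        charge += (min(usage_gb, cap) - prev_cap) * rate
        prev_cap = cap
    return charge

print(bill(5))    # within the free slab -> 0.0
print(bill(50))   # 40 GB billed at 2.0 -> 80.0
print(bill(150))  # 90 GB at 2.0 plus 50 GB at 1.0 -> 230.0
```

The same slab structure extends naturally to the differential tariffs and time-of-day pricing mentioned above, by selecting a different slab table per tariff class or time band.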
Network Operation Center (NOC)
The NOC shall provide facilities for centralized network management and end-to-end provisioning of multiple services, giving a single view of the entire network services being delivered country-wide. The servers for the NOC shall be connected through a Gigabit Ethernet link from the core router, with three zones of firewall within the Centre. The network is centrally managed from Network Operation Centres located at two sites, one being the master and the other the disaster recovery site. The main NOC is at Bangalore, with disaster recovery at Pune. An interface to the NMS back-office facility is provided, along with firewall security, in the Data Centre. All customer databases reside centrally at the NOC. The NMS of NIB-II Project 1 is the comprehensive NMS for the entire NIB-II, including NIB-I and the MPLS VPN.

Service Level Agreement (SLA): The network supports SLAs, i.e. the level of service that the customer can expect, together with any penalties to be levied on the service provider for failure to deliver. It should be possible to provide at least 4 classes of service. The SLA parameters include measurements of service delivery, availability, latency, throughput, restoration time etc. It should be possible to generate management reports providing information on customer node configuration and charges, faults, and achievement against the SLAs. Network management reports can be delivered via a secure website.
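The SLA compliance check described above, comparing measured parameters against the committed levels for each class of service, can be sketched as follows. The class names and threshold values here are invented for illustration and are not BSNL's actual SLA terms.

```python
# Sketch of an SLA compliance check over two of the parameters named in
# the text (availability and latency). Classes and thresholds are
# illustrative assumptions only.

SLA_CLASSES = {
    # class: (minimum availability %, maximum latency ms)
    "Gold":   (99.9, 50),
    "Silver": (99.5, 100),
    "Bronze": (99.0, 200),
}

def check_sla(cls: str, availability: float, latency_ms: float) -> bool:
    """Return True if the measured values meet the class thresholds."""
    min_avail, max_latency = SLA_CLASSES[cls]
    return availability >= min_avail and latency_ms <= max_latency

print(check_sla("Gold", 99.95, 40))  # meets both thresholds -> True
print(check_sla("Gold", 99.95, 80))  # latency exceeds the Gold limit -> False
```

A management report of the kind mentioned above would run such a check per customer and per measurement period, and record the misses against which penalties are computed.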
Implementation of OSS, Messaging, Storage, Billing, EMS & Security Solutions

Messaging
- The Messaging Solution of NIB-II provides SMTP, POP3, IMAP4, WEBMAIL, WAPMAIL and Notification services as a Class of Service to all customers of NIB-II and NIB-I.
- The message store will be content-aware, to support the different types of services to be created by BSNL, ranging from text email to multimedia messaging.
- The messaging solution provides flexible control of message aging, defining the conditions under which messages are automatically erased.
- The web mail interface will support multimedia message types for voice and fax mail, providing a unified messaging interface in future.
- The Message Transfer Agent (MTA) is designed to handle peak loads without service degradation or message loss.
- MTAs are designed to handle large message queues. Capability will be available to analyze and manage large message queues generated due to unreachability of the message store (internal) or the mail exchangers of other ISPs (external), or due to SPAM.

Web Hosting
- Web space (data storage) on servers based on UNIX and Microsoft for hosting HTML pages, with browser/FTP access for uploading and downloading pages as per the plan. Restriction on simultaneous FTP sessions.
- Web interface for centralized administration by users and administrators for services, usage reports, invoices and other reports.
- Interface for online registration of domain names.

Web Collocation
- Necessary security measures will be implemented from both the customer's and BSNL's perspective.
- Billing for this will be done on the basis of usage.
- One of the service differentiators will be the bandwidth on which the server is collocated.

Security Solution
- Anti-Virus solution: It will provide a mechanism to detect unknown viruses.
- The solution will protect gateway and SMTP traffic from viruses.
- Notification: For mails containing repeated complaints regarding abuse from the same IP address, mail will be sent automatically to the technical contact of the assignee of that IP address.
- Network Intrusion Detection System: The NIDS detects unauthorized internal/external intrusion attempts into the data centres of NIB-II and enables appropriate policies to be applied on the firewall so as to prevent such attacks in real time. Suitable alarms will also be sent to the Security Control Console.
- Access Control System: It is provided for database servers, messaging stores, web-hosting servers and the NIDS.
  - Self-protection: Must be able to prevent hackers with root/administrator access from circumventing or shutting down the security engine.
  - Resource protection: Must allow control of access to all system resources, including data files, devices, processes/services and audit files.
  - Rights delegation: Must provide the ability to designate specific users as administrators, auditors, password managers etc., with appropriate rights.
  - Program controls: Must provide protection against back doors and Trojan horses.

Enterprise Management System
- The objective of the EMS is to provide a snapshot graphical view of the health of the NIB-II IT infrastructure as a whole, including networking equipment, servers and services (business and process view).
- The reporting system generates customized reports, such as event-level, performance-level and service-level reports, grouped by specific data fields such as time period, location, customer, service type, device type etc.
- Security Management displays alarms and events specified by criteria such as alarm type, vendor, service, location, source of attack, type of attack and impacted services.
- Event Management captures all events generated across the complete IT infrastructure, correlates them and initiates corrective actions automatically, as defined.
- System & Application Management
measures the availability and performance of heterogeneous host systems on a 24x7x365 basis and initiates preventive and corrective actions automatically. It monitors and manages multiple attributes of a running process (such as status, memory usage, size and resident size, process time, threads, response time, average throughput and CPU utilization) and performs a restart when processes go down. It will generate reports on QoS and capacity planning.

Mediation
Billing mediation is responsible for collecting usage and other charging data from the various network nodes, normalizing the data into a consistent format, and distributing it to other applications and to the billing system for processing. The system will collect different parameters from different sources to provide Cflow- and NetFlow-based collection and mediation for usage-based billing of different services, including MPLS-VPN, web hosting, message hosting etc. The parameters are:
- Bandwidth
- Volume
- Time of day / day of week / month (peak/off-peak)
- Application (WWW, email (POP3/IMAP4), video, e-commerce etc.)
- Destination
- Type of service (Gold, Silver, Bronze/best effort) etc.

Database
- The RDBMS works in failover mode across geographical locations.
- The RDBMS works in a distributed mode across multiple servers.
- The RDBMS works in a cluster mode.

Billing
The billing engine caters to all the billing requirements of BSNL, including retail billing (prepaid and postpaid), wholesale and third-party billing, interconnect and content billing, dealer and agent commissions etc. The billing system supports the preparation of detailed bills, differential tariffs, cross-product discounting, and sponsored/split billing.
It also supports bundled accounting, hot billing/on-demand billing, hierarchy/corporate billing, discounts & promotions, taxes, a notification system, dealer and agent commissions, and content billing.

Content Billing
- The Content Billing System allows BSNL subscribers to access services provided by external content providers, and handles the revenue sharing with the content provider within a single billing platform.
- The system allows content providers who do not have their own customer care and billing system to use BSNL's billing system.

Authentication, Authorization and Accounting
- Irrespective of the mode of access, it will manage the authentication of all users/customers, both locally and via proxy RADIUS, and deliver the appropriate level of service to each customer.
- It will enable defining access schemes by time of day, day of week, call type (PSTN, ISDN, DSL etc.), calling number, called number etc.
- It is integrated with the billing server, providing real-time prepaid balance management and session management across multiple sessions of multiple services of a user.

1.6 Broadband Multiplay
What is a Multiplay service? The term means the provisioning of different telecommunication services by telecom service providers on wired and wireless networks, such as broadband Internet access, TV, VoD, VPN, telephony/VoIP and mobile phone service. Multi-Play is the convergence of voice, video and data services.
Figure 4: Multiplay Services

Triple play means providing the following services to the customer:
- Data (Internet)
- Voice (VoIP, as distinct from the PSTN service already provided alongside broadband)
- Video

BROADBAND MULTIPLAY PROJECT COMPONENTS
- L3PE (MCR / PE router of NIB-II Project 1, supplied by HCL)
- BNG (Broadband Network Gateway): connects the Multiplay network to the NIB-II backbone (Project 1)
- RPR Tier-1 switch: provides connectivity from the BNG to the RPR Tier-2 switch
- RPR Tier-2 switch
- OC LAN Tier-2 switch
- DSLAM
- DSL tester

Metro Core Routers provide access to the Broadband Multiplay network at the A1 and A2 locations of NIB-II. In total, 12 routers were implemented under the Metro Ethernet project, 2 at each of 6 locations:
- Bangalore (A1): 1 x 12416 and 1 x 12410
- Chennai (A1): 1 x 12416 and 1 x 12410
- Kolkata (A1): 1 x 12416 and 1 x 12410
- Pune (A2): 2 x 12410
- Hyderabad (A2): 2 x 12410
- Ahmedabad (A2): 2 x 12410

Figure 5: Broadband Multiplay Layout

Multiplay network connectivity for various services: The Multiplay broadband network uses the existing MPLS core, consisting of the Provider Edge and Provider routers. The content network provides services like VoD, IPTV and VoIP through specific content servers, which connect to the core network of the service provider. The network path and traffic flow of the various services are explained in the following diagrams.

Figure 6: Internet Services
Figure 7: Video Services

1.7 MNG-PAN
MNG-PAN is an MPLS based Next Generation Packet Aggregation Network which incorporates MPLS-TP (MPLS Transport Profile) technology for the transport network of telecom services.
MNG-PAN is an advanced packet aggregation network solution designed for efficient multi-service aggregation of voice, video and data traffic from various access technologies, including MSAN, DSLAM, FTTx and Broadband Wireless Access, as well as 3G/4G mobile network base stations. The products offer high-performance aggregation and a broad feature set, including network-wide time/clock synchronization, carrier-class sub-50 ms recovery resiliency, guaranteed QoS and SLA enforcement, and end-to-end multi-layer OAM. The MNG-PAN or OC-PAN switches shall collect the customer traffic (voice, video and data) generated through the different broadband access networks such as DSL, FTTH, MSAN, 3G Node B, WiMAX etc., and hand it over to the IP-MPLS core network. The MNG-PAN switches shall be centrally managed from the EMS deployed at the NOC/DR-NOC, shall transparently handle IPv6 traffic, and shall have 99.999% reliability.

Figure 8: MNG-PAN is used for aggregation of traffic from the various network elements deployed in the BSNL network

Limitation of the Existing Network
Among the causes of slow speed in the broadband multiplay network, one is the bottleneck at the aggregation network layer due to:
- Heavy traffic from the access network layer (cascaded DSLAMs, BSNL OLTEs, third-party OLTEs, RNC Node-B, WiFi hotspots, ILLs etc.) to the aggregation network.
- Heavy traffic of OCLANs, cascaded OCLANs and their connected access-layer NEs aggregating to the RPR.
- Presently there is only 1GE uplink connectivity from RPR T1 to the BNG, and more uplinks between RPR T1 and BNG generally need to be augmented; the number of ports at the BNG is limited, with a limited number of uplinks.
- Only 1GE RPR at most places.
- Low power issues in the RPR are generally observed.
- Unstable working condition of RPRs.
- Intermittent improper functioning of access network elements connected to the aggregation network layer.

WHY MNG PAN
- Increases the redundancy and capacity of the network.
- Bandwidth upgradation from 1GE to 10GE or 100GE, with more uplinks to the BNG.
- Rings of access network equipment are also possible.
- Stability in the network.
- Large network area coverage.
- Feasibility of future bandwidth upgradation, and enhanced redundancy at the aggregation layer by providing uplinks from two nodes.
- Can also be implemented at the access layer.
- Includes almost all features of MPLS-TP.
- Can also work as a layer 3 network for providing ILLs.

1.7.1 Typical Architecture of MNG-PAN
Figure 9: Deployment Architecture of MNG-PAN and OC-PAN
- PAN-COAU: Packet Aggregation Network Central Office Aggregation Unit switch, for connecting to the BNG or to MPLS routers
- OC-PAN: Other City Packet Aggregation Network switch

BSNL has deployed the MNG-PAN aggregation network in 15 cities, with 421 PAN-COAU & PAN switches and 287 OC-PAN switches. The OC-PAN ring aggregates the traffic from all its nodes and hands it over to the upper MNG-PAN ring. The Central Office Aggregation Unit (COAU) switch of the PAN ring aggregates the traffic from all the PAN nodes and hands it over to the IP-MPLS core. The network shall support internetworking between the MPLS-TP and IP/MPLS domains by either of the following solutions:
- MPLS-TP termination: the service is terminated at the edge of each domain, with traffic handed over at the UNI.
- MPLS-TP carried over IP-MPLS: the service/tunnel is not terminated at the hand-off point; hand-off to the core is over S-VLAN tunnels or using GRE tunnels.
The equipment shall support bridging functionality between the MPLS-TP and IP-MPLS domains.
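Section 1.7 requires 99.999% reliability for the MNG-PAN switches. As a quick sanity check on what "five nines" implies, the following sketch converts an availability target into an annual downtime budget; the arithmetic is generic and the 99.999% figure is the one stated above.

```python
# Allowed downtime for a given availability target over one year.
# Illustrative arithmetic only.

def allowed_downtime_minutes(availability: float,
                             period_hours: float = 365 * 24) -> float:
    """Return the downtime budget in minutes for one period
    at the given availability (e.g. 0.99999 for five nines)."""
    return (1.0 - availability) * period_hours * 60

# "Five nines" (99.999%) leaves roughly 5.26 minutes of downtime per year.
print(round(allowed_downtime_minutes(0.99999), 2))
```

The same function shows why sub-50 ms recovery matters: at five nines, even a handful of multi-second reconvergence events per year consumes a large share of the budget.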
1.8 Conclusion
Today every service provider is trying to build scalable, feature-rich data networks that can deliver profitable value-added services like Multi Protocol Label Switching (MPLS) virtual private networks (VPNs) carrying voice, video and data services. BSNL is also upgrading its network with time, to meet these requirements effectively.

2 OVERVIEW OF MPLS TECHNOLOGY & QOS

2.1 Learning objectives
This chapter introduces you to the following basic MPLS concepts:
- Unicast IP forwarding in traditional IP networks
- Architectural blocks of MPLS
- MPLS terminology
- CEF, FIB, LFIB, and LIB
- MPLS label assignment
- MPLS LDP session establishment
- MPLS label distribution and retention
- Penultimate hop popping
- Frame-mode MPLS operation and loop prevention
- Cell-mode MPLS operation, VC-merge, cell interleaving, and loop prevention
- Quality of Service parameters

2.2 Introduction
Over the last few years, the Internet has evolved into a ubiquitous network and inspired the development of a variety of new applications in business and consumer markets. These new applications have driven the demand for increased and guaranteed bandwidth in the backbone of the network. In addition to the traditional data services currently provided over the Internet, new voice and multimedia services are being developed and deployed. The Internet has emerged as the network for providing these converged services. However, the demands placed on the network by these new applications and services, in terms of speed and bandwidth, have strained the resources of the existing Internet infrastructure.
This transformation of the network toward a packet- and cell-based infrastructure has introduced uncertainty into what has traditionally been a fairly deterministic network. Multiprotocol Label Switching (MPLS) has evolved from a buzzword in the networking industry into a widely deployed technology in service provider (SP) networks. MPLS is a contemporary solution to a multitude of problems faced by present-day networks: speed, scalability, quality of service (QoS) management, and traffic engineering. Service providers are realizing larger revenues by implementing service models based on the flexibility and value-added services provided by MPLS solutions. MPLS also provides an elegant solution to satisfy the bandwidth management and service requirements of next-generation IP-based backbone networks.

In addition to the issue of resource constraints, another challenge relates to transporting bits and bytes over the backbone so as to provide differentiated classes of service to users. The exponential growth in the number of users and in the volume of traffic adds another dimension to this problem. Class of service (CoS) and QoS issues must be addressed in order to support the diverse requirements of the wide range of network users. In sum, despite some initial challenges, MPLS will play an important role in the routing, switching, and forwarding of packets through the next-generation network, in order to meet the service demands of network users.

MPLS is an IP technology developed by the Internet Engineering Task Force (IETF) to forward traffic using labels instead of the destination IP address. MPLS is a technique used by service providers to provide a better, single network infrastructure for real-time traffic such as voice and video.
The main advantage of utilizing MPLS is the ability to create Virtual Private Networks: MPLS can support both Layer 2 and Layer 3 MPLS VPNs. Additional advantages of MPLS are traffic engineering, utilization of one unified network infrastructure, optimal traffic flow, and better IP over ATM integration. MPLS is the technology used by all Internet Service Providers (ISPs) in their core or backbone networks for packet forwarding. Packets can also be forwarded independently of the paths computed by Interior Gateway Protocol (IGP) algorithms, using manually configured routes. Paths built up in an MPLS network are called Label Switched Paths (LSPs). Each MPLS-enabled router is called a Label Switching Router (LSR). Forwarding is typically based on the packet header containing the numerical value of the label. An MPLS label switched path is one-way and lies within a single autonomous system (Autonomous System - AS) or domain. MPLS LSPs can be established in two ways:

Dynamically: A dynamic LSP is built up automatically by the signalling protocol. Only the edge router (ingress router) is configured with the data required to establish paths.

Statically: In this case, the administrator chooses how to forward the traffic and what labels are used to distinguish resource distribution. Static LSPs consume fewer resources, do not require a signalling protocol, and do not need to store state information. The disadvantage is that they have to be configured manually, so configuration errors are possible. Additionally, a single device failure causes complete traffic loss.

2.3 Unicast IP Forwarding in Traditional IP Networks

In traditional IP networks, routing protocols are used to distribute Layer 3 routing information. Figure 1-1 depicts a traditional IP network where network layer reachability information (NLRI) for network 172.16.10.0/24 is propagated using an IP routing protocol. Regardless of the routing protocol, packet forwarding is based on the destination address alone.
Therefore, when a packet is received by the router, it determines the next-hop address using the packet's destination IP address along with the information from its own forwarding/routing table. This process of determining the next hop is repeated at each hop (router) from the source to the destination. As shown in Figure 1-1, in the data forwarding path, the following process takes place:

R4 receives a data packet destined for the 172.16.10.0 network.
R4 performs a route lookup for the 172.16.10.0 network in the forwarding table, and the packet is forwarded to the next-hop Router R3.
R3 receives the data packet with destination 172.16.10.0, performs a route lookup for the 172.16.10.0 network, and forwards the packet to next-hop Router R2.
R2 receives the data packet with destination 172.16.10.0, performs a route lookup for the 172.16.10.0 network, and forwards the packet to next-hop Router R1.

Figure 10: Traditional IP Forwarding Operation

Because R1 is directly connected to network 172.16.10.0, the router forwards the packet on to the appropriate connected interface.

2.3.1 Overview of MPLS Forwarding

In MPLS-enabled networks, packets are forwarded based on labels. These labels might correspond to IP destination addresses or to other parameters, such as QoS classes and source address. Labels are generated per router (and in some cases, per interface on a router) and bear local significance to the router generating them. Routers assign labels to define paths called Label Switched Paths (LSPs) between endpoints. Because of this, only the routers on the edge of the MPLS network perform a routing lookup. The figure below illustrates the same network as depicted in Figure 1-1 with MPLS forwarding, where route table lookups are performed only by the MPLS edge routers, R1 and R4.
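The hop-by-hop, longest-prefix-match lookup described above can be sketched in a few lines of Python. The forwarding tables below are illustrative assumptions modeled on Figure 1-1 (the default route via "R5" is invented for the example), not data from this document:

```python
import ipaddress

# Hypothetical per-router forwarding tables modeled on Figure 1-1.
FIB = {
    "R4": {"172.16.10.0/24": "R3", "0.0.0.0/0": "R5"},  # R5 is an invented default
    "R3": {"172.16.10.0/24": "R2"},
    "R2": {"172.16.10.0/24": "R1"},
    "R1": {"172.16.10.0/24": "connected"},
}

def lookup(router, dest):
    """Longest-prefix match: the most specific matching prefix wins."""
    addr = ipaddress.ip_address(dest)
    matches = [p for p in FIB[router] if addr in ipaddress.ip_network(p)]
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return FIB[router][best]

def forward(router, dest):
    """Repeat the per-hop lookup until the destination network is connected."""
    path = [router]
    while (nxt := lookup(router, dest)) != "connected":
        path.append(nxt)
        router = nxt
    return path

print(forward("R4", "172.16.10.1"))  # ['R4', 'R3', 'R2', 'R1']
```

Each router repeats the same independent lookup; nothing ties the hops together, which is exactly the per-hop behavior that MPLS replaces with a pre-established label switched path.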
The routers in the MPLS network R1, R2, and R3 propagate updates for the 172.16.10.0/24 network via an IGP routing protocol just like in traditional IP networks, assuming no filters or summarization are configured. This leads to the creation of an IP forwarding table. Also, because the links connecting the routers are MPLS enabled, they assign local labels for destination 172.16.10.0 and propagate them upstream to their directly connected peers using a Label Distribution Protocol (LDP); for example, R1 assigns a local label L1 and propagates it to the upstream neighbor R2. R2 and R3 similarly assign labels and propagate the same to upstream neighbors R3 and R4, respectively. Consequently, as illustrated in Figure 1-2, the routers now maintain a label forwarding table to enable labeled packet forwarding in addition to the IP routing table. The concept of upstream and downstream is explained in greater detail in the section "MPLS Terminology."

Figure 11: Forwarding in the MPLS Domain

As shown in Figure 1-2, the following process takes place in the data forwarding path from R4 to R1:

1. R4 receives a data packet for network 172.16.10.0 and identifies that the path to the destination is MPLS enabled. Therefore, R4 applies a label L3 (learned from downstream Router R3) on the packet and forwards the labeled packet to next-hop Router R3.
2. R3 receives the labeled packet with label L3, swaps the label L3 with L2, and forwards the packet to R2.
3. R2 receives the labeled packet with label L2, swaps the label L2 with L1, and forwards the packet to R1.
4. R1 is the border router between the IP and MPLS domains; therefore, R1 removes the labels on the data packet and forwards the IP packet to destination network 172.16.10.0.
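The push/swap/pop sequence in steps 1-4 can be simulated with a small table per router. This is a sketch with symbolic label names; the structure of the entries is simplified for illustration and is not the actual LFIB format:

```python
# Label operations along the LSP R4 -> R3 -> R2 -> R1 of Figure 1-2.
# Each entry: incoming label -> (operation, outgoing label, next hop).
LFIB = {
    "R4": {"ip": ("push", "L3", "R3")},             # ingress Edge LSR
    "R3": {"L3": ("swap", "L2", "R2")},
    "R2": {"L2": ("swap", "L1", "R1")},
    "R1": {"L1": ("pop", None, "172.16.10.0/24")},  # egress Edge LSR
}

def traverse(ingress="R4"):
    """Record the label operation performed at each hop along the LSP."""
    router, label, trace = ingress, "ip", []
    while True:
        op, out_label, nxt = LFIB[router][label]
        trace.append((router, op))
        if op == "pop":          # the IP packet leaves the MPLS domain
            return trace
        router, label = nxt, out_label

print(traverse())
# [('R4', 'push'), ('R3', 'swap'), ('R2', 'swap'), ('R1', 'pop')]
```

Note that only the ingress router needed to classify the packet ("ip" key); every other hop keys its decision purely on the incoming label.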
a) Architectural Blocks of MPLS

MPLS functionality on Cisco devices is divided into two main architectural blocks:

Control plane— Performs functions related to identifying reachability to destination prefixes. Therefore, the control plane contains all the Layer 3 routing information, as well as the processes within, to exchange reachability information for a specific Layer 3 prefix. Common examples of control plane functions are routing protocol information exchanges, as in OSPF and BGP. Hence, IP routing information exchange is a control plane function. In addition, all protocol functions that are responsible for the exchange of labels between neighboring routers function in the control plane, as in label distribution protocols.

Data plane— Performs the functions relating to forwarding data packets. These packets can be either Layer 3 IP packets or labeled IP packets. The information in the data plane, such as label values, is derived from the control plane. Information exchange between neighboring routers creates mappings of IP destination prefixes to labels in the control plane, which are used to forward labeled packets in the data plane. The figure below depicts the control plane and data plane functions.

Figure 12: Control Plane and Data Plane on a Router

2.3.2 MPLS Terminology

This section provides an overview of the common MPLS-related terminology used in the rest of this chapter:

Forwarding Equivalence Class (FEC) - As defined in RFC 3031 (MPLS architecture), this is a group of packets that are forwarded in the same manner (over the same path with the same forwarding treatment).

MPLS Label Switch Router (LSR) - Performs the function of label switching; the LSR receives a labeled packet, swaps the label with an outgoing label, and forwards the new labeled packet from the appropriate interface.
The LSR, depending on its location in the MPLS domain, can perform label disposition (removal, also called pop), label imposition (addition, also called push), or label swapping (replacing the top label in a label stack with a new outgoing label value). The LSR, depending on its location in the MPLS domain, might also perform label stack imposition or disposition. The concept of a label stack is explained later in this section. During label swapping, the LSR replaces only the top label in the label stack; the other labels in the label stack are left untouched during the label swapping and forwarding operation at the LSR.

MPLS Edge-Label Switch Router (E-LSR) - An LSR at the border of an MPLS domain. The ingress Edge LSR performs the functions of label imposition (push) and forwarding of a packet to the destination through the MPLS-enabled domain. The egress Edge LSR performs the functions of label disposition or removal (pop) and forwarding an IP packet to the destination. Note that the imposition and disposition processes on an Edge LSR might involve label stacks rather than single labels.

Figure 1-4 depicts the network in Figure 1-2 with all routers identified as LSRs or Edge LSRs based on their location and operation in the MPLS domain.

Figure 13: LSR and Edge LSR

MPLS Label Switched Path (LSP) - The path from source to destination for a data packet through an MPLS-enabled network. LSPs are unidirectional in nature. The LSP is usually derived from IGP routing information but can diverge from the IGP's preferred path to the destination. In the figure, the LSP for network 172.16.10.0/24 from R4 is R4-R3-R2-R1.

Upstream and downstream - The concept of downstream and upstream is pivotal in understanding the operation of label distribution (control plane) and data forwarding in an MPLS domain.
Both downstream and upstream are defined with reference to the destination network: prefix or FEC. Data intended for a particular destination network always flows downstream. Updates (routing protocol or label distribution, LDP/TDP) pertaining to a specific prefix are always propagated upstream. This is depicted in the figure, where downstream with reference to the destination prefix 172.16.20.0/24 is the path R1-R2-R3, and downstream with reference to 172.16.10.0/24 is the path R3-R2-R1. Therefore, in Figure 1-5, R2 is downstream to R1 for destination 172.16.20.0/24, and R1 is downstream to R2 for destination 172.16.10.0/24.

Figure 14: Upstream and Downstream

MPLS labels and label stacks— An MPLS label is a 20-bit number that is assigned to a destination prefix on a router and that defines the properties of the prefix as well as the forwarding mechanisms that will be performed for a packet destined for the prefix. The format of an MPLS label is shown in the figure.

Figure 15: MPLS Label

An MPLS label consists of the following parts:

20-bit label value
3-bit experimental field
1-bit bottom-of-stack indicator
8-bit Time-to-Live field

The 20-bit label value is a number assigned by the router that identifies the prefix in question. Labels can be assigned either per interface or per chassis. The 3-bit experimental field defines the QoS assigned to the FEC in question that has been assigned a label. For example, the 3 experimental bits can map to the eight IP precedence values (0 through 7) to carry the IP QoS assigned to packets as they traverse an MPLS domain. A label stack is an ordered set of labels where each label has a specific function.
If the router (Edge LSR) imposes more than one label on a single IP packet, the result is called a label stack, where multiple labels are imposed on a single IP packet. The bottom-of-stack indicator identifies whether the label that has been encountered is the bottom label of the label stack. The TTL field performs the same function as an IP TTL: the packet is discarded when the TTL of the packet reaches 0, which prevents unwanted packets from looping in the network. Whenever a labeled packet traverses an LSR, the label TTL value is decremented by 1. The label is inserted between the Frame Header and the Layer 3 Header in the packet. The figure depicts the label imposition between the Layer 2 and Layer 3 headers in an IP packet.

Figure 16: MPLS Label Imposition

If the value of the S bit (bottom-of-stack indicator) in the label is 0, the router understands that a label stack implementation is in use. As previously mentioned, an LSR swaps only the top label in a label stack. An egress Edge LSR, however, continues label disposition in the label stack until it finds that the value of the S bit is set to 1, which denotes the bottom of the label stack. After the router encounters the bottom of the stack, it performs a route lookup based on the information in the IP Layer 3 Header and appropriately forwards the packet toward the destination. In the case of an ingress Edge LSR, the Edge LSR might impose (push) more than one label to implement a label stack, where each label in the stack has a specific function. Label stacks are used when offering MPLS-based services such as MPLS VPN or MPLS traffic engineering. In MPLS VPN, the second label in the label stack identifies the VPN. In traffic engineering, the top label identifies the endpoint of the TE tunnel, and the second label identifies the destination.
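The 32-bit label stack entry described above (20-bit label, 3-bit EXP, 1-bit S, 8-bit TTL) can be packed and unpacked with simple bit operations. This sketch follows the field layout of Figure 15; the label values used are arbitrary examples:

```python
def pack_label(label, exp=0, s=0, ttl=64):
    """Encode one 32-bit MPLS label stack entry:
    20-bit label | 3-bit EXP | 1-bit bottom-of-stack (S) | 8-bit TTL."""
    assert 0 <= label < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def unpack_label(entry):
    """Split a 32-bit entry back into its four fields."""
    return {"label": entry >> 12, "exp": (entry >> 9) & 0x7,
            "s": (entry >> 8) & 0x1, "ttl": entry & 0xFF}

# A two-entry stack, as in MPLS VPN: only the bottom entry carries S=1.
stack = [pack_label(17, exp=5), pack_label(21, s=1)]
print([unpack_label(e)["s"] for e in stack])  # [0, 1]
```

A receiver walking the stack keeps popping entries until it sees S=1, mirroring the egress Edge LSR behavior described above.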
In Layer 2 VPN implementations over MPLS, such as AToM (Any Transport over MPLS) and VPLS (Virtual Private LAN Service), the top label identifies the tunnel header or endpoint, and the second label identifies the VC. All generic iterations of the label stack implementation are shown in the figure below.

Figure 17: MPLS Label Stack

2.3.3 MPLS Control and Data Plane Components

Cisco Express Forwarding (CEF) is the foundation on which MPLS and its services operate on a Cisco router. Therefore, CEF is a prerequisite to implement MPLS on all Cisco platforms except traditional ATM switches that support only data plane functionality. CEF is a proprietary switching mechanism used on Cisco routers that enhances the simplicity and the IPv4 forwarding performance of a router manifold. CEF avoids the overhead of cache rewrites in the IP core environment by using a Forwarding Information Base (FIB) for the destination switching decision; the FIB mirrors the entire contents of the IP routing table, with a one-to-one mapping between FIB table and routing table entries. When CEF is used on a router, the router maintains, at a minimum, an FIB, which contains a mapping of destination networks in the routing table to appropriate next-hop adjacencies. Adjacencies are network nodes that can reach one another with a single hop across the link layer. The FIB resides in the data plane, which is the forwarding engine for packets processed by the router. In addition to the FIB, two other structures are maintained on the router: the Label Information Base (LIB) and the Label Forwarding Information Base (LFIB). The label distribution protocol in use between adjacent MPLS neighbors is responsible for the creation of entries in the LIB and LFIB.
The LIB functions in the control plane and is used by the label distribution protocol; in it, IP destination prefixes in the routing table are mapped to next-hop labels received from downstream neighbors, as well as to local labels generated by the label distribution protocol. The LFIB resides in the data plane and contains a local-label to next-hop-label mapping along with the outgoing interface, which is used to forward labeled packets.

Information about reachability to destination networks from routing protocols is used to populate the Routing Information Base (RIB), or routing table. The routing table, in turn, provides information for the FIB. The LIB is populated using information from the label distribution protocol; information from the LIB, along with information from the FIB, is used to populate the LFIB. The figure below shows the interoperation of the various tables maintained on a router.

Figure 18: MPLS Control and Data Plane Components

2.3.4 MPLS Operation

The implementation of MPLS for data forwarding involves the following four steps:

MPLS label assignment (per LSR)
MPLS LDP or TDP session establishment (between LSRs/ELSRs)
MPLS label distribution (using a label distribution protocol)
MPLS label retention

MPLS operation typically involves adjacent LSRs forming an LDP session, assigning local labels to destination prefixes, and exchanging these labels over established LDP sessions. Upon completion of label exchange between adjacent LSRs, the control and data structures of MPLS, namely the FIB, LIB, and LFIB, are populated, and the router is ready to forward data plane information based on label values.
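The relationship between these tables can be sketched as follows. The prefixes, labels, and interface names below are invented for illustration; the point is that the LFIB (data plane) is derived from the LIB (all advertised bindings) filtered through the FIB's choice of next hop:

```python
# Control plane: all label bindings learned for a prefix (LIB),
# including a locally generated label and one label per LDP neighbor.
LIB = {"172.16.10.0/24": {"local": 18, "R3": 21, "R5": 35}}

# FIB (derived from the routing table): the selected next hop per prefix.
FIB = {"172.16.10.0/24": ("R3", "Gi0/1")}

# Data plane: build the LFIB, keeping only the next hop's label.
LFIB = {}
for prefix, bindings in LIB.items():
    next_hop, out_if = FIB[prefix]
    LFIB[bindings["local"]] = (bindings[next_hop], next_hop, out_if)

def switch(in_label):
    """Swap the incoming top label for the next hop's label."""
    out_label, next_hop, out_if = LFIB[in_label]
    return f"label {out_label} -> {next_hop} via {out_if}"

print(switch(18))  # label 21 -> R3 via Gi0/1
```

The binding from R5 (a neighbor that is not the next hop) stays in the LIB only; whether such bindings are kept at all is the label retention choice discussed later in this chapter.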
a) MPLS Label Assignment

A label is assigned to IP networks reachable by a router and then imposed on data packets forwarded to those IP networks. IP routing protocols advertise reachability to destination networks. The same process needs to be implemented for routers or devices that are part of the MPLS domain to learn about the labels assigned to destination networks by neighboring routers. The label distribution protocol (LDP or TDP) assigns and exchanges labels between adjacent LSRs in an MPLS domain following session establishment. As previously mentioned, labels can be assigned either globally (per router) or per interface on a router.

b) LDP Session Establishment

Following label assignment on a router, these labels are distributed among directly connected LSRs if the interfaces between them are enabled for MPLS forwarding. This is done by using either LDP or the Tag Distribution Protocol (TDP). TDP is deprecated and, by default, LDP is the label distribution protocol. The command mpls label protocol {ldp | tdp} is configured only if LDP is not the default label distribution protocol or if you are reverting from LDP to TDP. The command can be configured in global and interface configuration mode. The interface configuration command will, however, override the global configuration. TDP and LDP function the same way but are not interoperable. It is important to note that when Cisco routers are in use, the default protocol that is running on an MPLS-enabled interface is dependent on the version of IOS running on the device; care must be taken when configuring Cisco routers in a multivendor environment. TDP uses TCP port 711, and LDP uses TCP port 646. A router might use both TDP and LDP on the same interface to enable dynamic formation of LDP or TDP peers, depending on the protocol running on the interface of the peering MPLS neighbor. LDP is defined in RFC 3036 and is implemented predominantly between adjacent peers (adjacencies as defined by the IGP).
In some cases, LDP sessions can also be configured between nonadjacent peers, in which case it is called a directed (targeted) LDP session. There are four categories of LDP messages:

Discovery messages— Announce and sustain an LSR's presence in the network
Session messages— Establish, maintain, and tear down sessions between LSRs
Advertisement messages— Advertise label mappings to FECs
Notification messages— Signal errors

Figure 19: LDP Session Establishment

All LDP messages follow the type, length, value (TLV) format. LDP uses TCP port 646, and the LSR with the higher LDP router ID opens a connection to port 646 of another LSR:

1. LDP sessions are initiated when an LSR sends periodic hellos (using UDP multicast on 224.0.0.2) on interfaces enabled for MPLS forwarding. If another LSR is connected to that interface (and the interface is enabled for MPLS), the directly connected LSR attempts to establish a session with the source of the LDP hello messages. The LSR with the higher LDP router ID is the active LSR. The active LSR attempts to open a TCP connection with the passive LSR (the LSR with the lower router ID) on TCP port 646 (LDP).
2. The active LSR then sends an initialization message to the passive LSR, which contains information such as the session keepalive time, label distribution method, max PDU length, receiver's LDP ID, and whether loop detection is enabled.
3. The passive LDP LSR responds with an initialization message if the parameters are acceptable. If the parameters are not acceptable, the passive LDP LSR sends an error notification message.
4. The passive LSR sends a keepalive message to the active LSR after sending an initialization message.
5.
The active LSR sends a keepalive to the passive LDP LSR, and the LDP session comes up. At this juncture, label-FEC mappings can be exchanged between the LSRs.

c) MPLS Label Distribution with LDP

In an MPLS domain running LDP, a label is assigned to a destination prefix found in the FIB, and it is distributed to upstream neighbors in the MPLS domain after session establishment. The labels that are of local significance on the router are exchanged with adjacent LSRs during label distribution. The label binding of a specific prefix to a local label and a next-hop label (received from the downstream LSR) is then stored in the LFIB and LIB structures. The label distribution methods used in MPLS are as follows:

Downstream on demand— This mode of label distribution allows an LSR to explicitly request, from its downstream next-hop router, a label mapping to a particular destination prefix and is thus known as downstream-on-demand label distribution.

Unsolicited downstream— This mode of label distribution allows an LSR to distribute bindings to upstream LSRs that have not explicitly requested them and is referred to as unsolicited downstream label distribution.

Figure 1-11 depicts the two modes of label distribution between R1 (Edge LSR) and R2 (LSR). In the downstream-on-demand distribution process, LSR R2 requests a label for the destination 172.16.10.0. R1 replies with a label mapping of label 17 for 172.16.10.0. In the unsolicited downstream distribution process, R1 does not wait for a request for a label mapping for prefix 172.16.10.0 but sends the label mapping information to the upstream LSR R2.
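The active/passive role selection and the handshake in steps 1-5 can be sketched as follows. The router IDs are invented, and the message sequence is a simplified model of the exchange, not a protocol implementation:

```python
def ldp_roles(rid_a, rid_b):
    """The LSR with the numerically higher LDP router ID becomes the active
    peer and opens the TCP connection to port 646 of the passive peer."""
    a = tuple(int(octet) for octet in rid_a.split("."))
    b = tuple(int(octet) for octet in rid_b.split("."))
    return (rid_a, rid_b) if a > b else (rid_b, rid_a)

active, passive = ldp_roles("10.0.0.2", "10.0.0.1")

# Message flow after the UDP hellos (multicast to 224.0.0.2) have
# discovered the neighbor:
handshake = [
    (active,  "Initialization"),  # keepalive time, distribution method, max PDU
    (passive, "Initialization"),  # sent only if the parameters are acceptable
    (passive, "KeepAlive"),
    (active,  "KeepAlive"),       # session is up; label mappings can follow
]
print(active, passive)  # 10.0.0.2 10.0.0.1
```

Comparing the IDs octet by octet (rather than as strings) matters: "10.0.0.9" must compare lower than "10.0.1.1" even though it sorts higher lexicographically.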
Figure 20: Unsolicited Downstream Versus Downstream on Demand

d) MPLS Label Retention

If an LSR supports liberal label retention mode, it maintains the bindings between a label and a destination prefix that are received from downstream LSRs that might not be the next hop for that destination. If an LSR supports conservative label retention mode, it discards bindings received from downstream LSRs that are not next-hop routers for a destination prefix. Therefore, with liberal retention mode, an LSR can almost immediately start forwarding labeled packets after IGP convergence, but the number of labels maintained for a particular destination is large, thus consuming memory. With conservative label retention, the labels maintained are only those from the confirmed LDP or TDP next-hop neighbors, thus consuming minimal memory.

2.3.5 Special Outgoing Label Types

LSRs perform the operation of label swapping, imposition, or disposition depending on their location in the MPLS domain. In certain cases, the incoming label maps to special outgoing labels that define the operation to be performed at the upstream LSR or router. These labels are propagated by the downstream LSR during label distribution to the upstream LSR. The following outlines the types of outgoing labels that can be associated with a packet:

Untagged— The incoming MPLS packet is converted to an IP packet and forwarded to the destination (MPLS to IP domain transition). This is used in the implementation of MPLS VPN.

Implicit-null or POP label— This label is assigned when the top label of the incoming MPLS packet is removed and the resulting MPLS or IP packet is forwarded to the next-hop downstream router. The value for this label is 3 (in the 20-bit label field). This label is used in MPLS networks that implement penultimate hop popping, discussed in the next section.
Explicit-null label— This label is assigned to preserve the EXP value of the top label of an incoming packet. The top label is swapped with a label value of 0 (in the 20-bit label field) and forwarded as an MPLS packet to the next-hop downstream router. This label is used in the implementation of QoS with MPLS.

Aggregate— With this label, the incoming MPLS packet is converted to an IP packet (by removing all labels if a label stack is found on the incoming packet), and a FIB (CEF) lookup is performed to identify the outgoing interface to the destination.

The figure illustrates the usage of these label types.

Figure 21: Special Label Types

2.3.6 Penultimate Hop Popping

Penultimate hop popping is performed in MPLS-based networks where the router upstream of the Edge LSR removes the top label in the label stack and forwards only the resulting packet (either a labeled packet or an IP packet) for a particular FEC. This process is signaled by the downstream Edge LSR during label distribution with LDP. The downstream Edge LSR distributes an implicit-null (POP) label to the upstream router, which signals it to pop the top label in the label stack and forward the resulting labeled or IP packet. When the packet is received by the Edge LSR, no label lookup is performed in the LFIB if the incoming packet is an IP packet. Therefore, penultimate hop popping saves a single lookup on edge routers. The operation of penultimate hop popping is depicted in the figure below.

Figure 22: Penultimate Hop Popping

As illustrated in Figure 1-13, the downstream Edge-LSR1 distributes an implicit-null label mapping for network 172.16.10.0/24 to its upstream LSR1.
Upon receiving a labeled packet, LSR1 pops the top label and sends the resulting IP packet to Edge-LSR1.

2.4 Introduction to Quality of Service (QoS)

Quality of Service (QoS) is a broad term used to describe the overall experience a user or application will receive over a network. QoS involves a broad range of technologies, architectures, and protocols. Network operators achieve end-to-end QoS by ensuring that network elements apply consistent treatment to traffic flows as they traverse the network. The goal of this chapter is to put all of this into perspective and provide a holistic view of the need for and the value of creating a QoS-enabled network. This chapter describes the relationship between the numerous technologies and acronyms used to describe QoS, and assumes that the reader is familiar with different networking technologies and architectures. This chapter provides answers to the following:

Why is a QoS-enabled network important?
What considerations should be made when deploying a QoS-enabled network?
Where should QoS be applied in the network?

2.5 Why is QoS important?

Today, network traffic is highly diverse and each traffic type has unique requirements in terms of bandwidth, delay, jitter, and availability. With the explosive growth of the Internet, most network traffic today is IP-based. Having a single end-to-end transport protocol is beneficial because networking equipment becomes less complex to maintain, resulting in lower operational costs. This benefit, however, is countered by the fact that packets do not take a specific path as they traverse the network, which results in unpredictable QoS in a best-effort network. The IP protocol was originally designed to reliably get a packet to its destination with little consideration to the amount of time it takes to get there.
IP networks must now transport many different types of applications that require low latency; otherwise, the end-user quality may be significantly affected or, in some cases, the application simply does not function at all. Consider a voice application. Voice applications originated on public telephone networks using TDM (Time Division Multiplexing) technology, which has very deterministic behavior. On TDM networks, voice traffic experienced a low and fixed amount of delay with essentially no loss. Voice applications require this type of behavior to function properly. Voice applications also require this same level of "TDM voice" quality to meet user expectations. Take this "TDM voice" application and now transport it over a best-effort IP network. The best-effort IP network introduces a variable and unpredictable amount of delay to the voice packets and also drops voice packets when the network is congested. As you can see, the best-effort IP network does not provide the behavior that the voice application requires. QoS techniques can be applied to the best-effort IP network to make it capable of supporting VoIP with acceptable, consistent, and predictable voice quality.

2.6 QoS and Network Convergence

Since the early 1990s there has been a movement towards network convergence, i.e., transporting all services over the same network infrastructure. Traditionally, there were separate, dedicated networks for different types of applications. However, many of these networks are being consolidated to reduce operational costs or improve profit margins. Not too long ago, an enterprise may have had a private TDM-based voice network, an IP network to the Internet, an ISDN video conferencing network, an SNA network, and a multi-protocol (IPX, AppleTalk, etc.) LAN.
Similarly, a service provider may have had a TDM-based voice network, an ATM or SONET backbone network, and a Frame Relay or ISDN access network. Today, all data networks are converging on IP transport because the applications have migrated towards being IP-based. The TDM-based voice networks have also begun moving towards IP. Video conferencing is also moving towards IP, albeit at a lesser pace. When the different applications had dedicated networks, QoS technologies played a smaller role because the traffic was similar in behavior and the dedicated networks were fine-tuned to meet the required behavior of the particular application. The converged network mixes different types of traffic, each with very different requirements. These different traffic types often react unfavourably together. For example, a voice application expects to experience no packet loss and a minimal but fixed amount of packet delay. The voice application operates in a steady-state fashion, with voice channels (or packets) being transmitted at fixed time intervals. The voice application receives this performance level when it operates over a TDM network. Now take the voice application and run it over a best-effort IP network as VoIP (Voice over IP). The best-effort IP network has varying amounts of packet loss and potentially large amounts of variable delay (typically caused by network congestion points). The IP network provides almost exactly the opposite of the performance required by the voice application.

2.7 QoS versus Bandwidth

Some believe that QoS is not needed and that increasing bandwidth will suffice and provide good QoS for all applications. They argue that implementing QoS is complicated and adding bandwidth is simple. While there is some truth to these statements, one first must look closer at the QoS problems to be solved and whether adding bandwidth will solve them.
If all network connections had infinite bandwidth such that networks never became congested, there would be no need to apply QoS technologies. Indeed, there are parts of some carrier networks that have tremendous amounts of bandwidth and have been carefully engineered to minimize congestion. However, high-bandwidth connections are not available throughout the network, from the traffic's source to the traffic's destination. This is especially true for access networks, where the most commonly available bandwidth is typically only hundreds of kbps. Furthermore, bandwidth discontinuities in the network are potential congestion points, resulting in variable and unpredictable QoS as experienced by a user or application. Carriers have been aggressively adding bandwidth to their IP networks to meet the exploding demand of the Internet. Some carriers can offer low-latency connections across their metropolitan area networks (MANs) or cross-country or continental long-haul networks. Traffic must be treated consistently to achieve a subscribed QoS level. If the access network aggregation point is a congestion point in the network, the resulting end-to-end QoS can be poor even though the long-haul network may offer excellent QoS performance.

2.8 Bandwidth Owner versus Renter

If the network operator owns the network connection's copper, fiber or frequencies (wireless), adding bandwidth to provide QoS may be an attractive choice in lieu of implementing more complicated QoS traffic management mechanisms. Some interconnect technologies, such as DWDM (Dense Wavelength Division Multiplexing) over fiber optic connections, allow bandwidth to be added simply and cost-effectively over the existing cable infrastructure.
Other interconnect technologies, such as fixed or mobile wireless, are much more constrained due to frequency spectrum limitations regulated by government agencies. For example, suppose a network operator has dark fiber in its backbone network and DWDM interfaces on its backbone switches, and has determined that sufficient bandwidth is no longer available to provide network users with the required performance objectives. In this scenario, it is simpler and more cost-effective to provide QoS by adding wavelengths (increasing bandwidth) than by adding complicated QoS traffic management mechanisms. As another example, suppose a network operator provides best-effort services, rents (leases) bandwidth from another service provider, and wants to offer a premium data service with performance guarantees. Through the application of QoS mechanisms, this network operator can offer service differentiation between the premium data service and best-effort subscribers. For a lightly to moderately loaded network, this may be accomplished without increasing bandwidth. Hence, for the same fixed recurring cost of the leased bandwidth, the network operator can offer additional, higher-priced services and increase profit margins.

2.9 QoS Performance Dimensions

A number of QoS parameters can be measured and monitored to determine whether a service level offered or received is being achieved. These parameters consist of the following:
- Network Availability
- Bandwidth
- Delay
- Jitter
- Loss

There are also QoS performance-affecting parameters that cannot be measured directly but that provide the traffic management mechanisms in network routers and switches. These consist of:
- Emission priority
- Discard priority

Each of these QoS parameters affects the application's or end-user's experience.

2.10 Network Availability

Network availability can have a significant effect on QoS.
Simply put, if the network is unavailable, even during brief periods of time, the user or application may experience unpredictable or undesirable performance (QoS). Network availability is the summation of the availability of the many items that are used to create a network. These include networking device redundancy (e.g., redundant interfaces, processor cards or power supplies in routers and switches), resilient networking protocols, multiple physical connections (e.g., fiber or copper), backup power sources, etc. Network operators can increase their network's availability by implementing varying degrees of each of these items.

2.11 Bandwidth

Bandwidth is the second most significant parameter that affects QoS. Bandwidth allocation can be subdivided into two types:
- Available Bandwidth
- Guaranteed Bandwidth

a) Available Bandwidth

Many network operators oversubscribe the bandwidth on their network to maximize the return on investment of their network infrastructure or leased bandwidth. Oversubscribing bandwidth means the bandwidth a user subscribes to is not always available. All users compete for the available bandwidth; they get more or less bandwidth depending upon the amount of traffic from other users on the network at any given time. Available bandwidth is a technique commonly used on consumer ADSL networks, e.g., a customer signs up for a 384 kbps service that provides no QoS (bandwidth) guarantee in the SLA. The SLA points out that the 384 kbps is "typical" but does not make any guarantees. Under lightly loaded conditions, the user may achieve 384 kbps, but as the network loads, this bandwidth will not be achieved consistently. This is most noticeable during certain times of the day when more users access the network.
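The oversubscription arithmetic behind the Available Bandwidth service can be sketched briefly. This is an illustrative model only; the uplink capacity, subscribed rate and user counts below are hypothetical, and real schedulers are more sophisticated than a simple equal split.

```python
# Hypothetical illustration of oversubscription: when the shared uplink is
# congested, each active user gets at most an equal share of the link,
# capped by their subscribed rate.

def per_user_bandwidth(link_kbps: float, subscribed_kbps: float, active_users: int) -> float:
    """Bandwidth each active user sees: the subscribed rate, or an equal
    share of the uplink when the uplink cannot carry everyone's full rate."""
    if active_users == 0:
        return 0.0
    fair_share = link_kbps / active_users
    return min(subscribed_kbps, fair_share)

# A 2048 kbps aggregation uplink shared by subscribers of a 384 kbps service:
print(per_user_bandwidth(2048, 384, 4))   # lightly loaded: each gets 384
print(per_user_bandwidth(2048, 384, 16))  # busy hour: each gets only 128.0
```

The second call shows the "busy hour" effect described above: the subscribed 384 kbps is achieved only while few users are active.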
b) Guaranteed Bandwidth

Network operators offer a service that provides a guaranteed minimum bandwidth and a burst bandwidth in the SLA. Because the bandwidth is guaranteed, the service is priced higher than the Available Bandwidth service. The network operator must ensure that subscribers to this Guaranteed Bandwidth service get preferential treatment (a QoS bandwidth guarantee) over the Available Bandwidth subscribers. In some cases, the network operator separates the subscribers by different physical or logical networks, e.g., VLANs, virtual circuits, etc. In other cases, the Guaranteed Bandwidth service traffic may share the same network infrastructure with the Available Bandwidth service traffic. This is often the case at locations where network connections are expensive or the bandwidth is leased from another service provider. When subscribers share the same network infrastructure, the network operator must prioritize the Guaranteed Bandwidth subscribers' traffic over the Available Bandwidth subscribers' traffic so that in times of network congestion the Guaranteed Bandwidth SLAs are met. Burst bandwidth can be specified in terms of the amount and duration of excess bandwidth (burst) above the guaranteed minimum. QoS mechanisms may be activated to discard traffic that is consistently above the guaranteed minimum bandwidth that the subscriber agreed to in the SLA.

2.12 Delay

Network delay is the transit time an application experiences from the ingress point to the egress point of the network. Delay can cause significant QoS issues with applications such as voice and video, and with applications such as SNA and fax transmission that simply time out and fail under excessive delay conditions.
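Two of the fixed delay components that make up this transit time, serialization onto the link and propagation over distance, can be estimated with simple arithmetic. The packet size, link speeds and fiber propagation speed below are illustrative assumptions, not figures from this document.

```python
# Rough sketch of two fixed delay components: serialization (clocking the
# packet onto the wire at each hop) and propagation (distance over the medium).

SPEED_IN_FIBER_KM_S = 200_000  # ~2/3 the speed of light, a common approximation

def serialization_ms(packet_bytes: int, link_bps: float) -> float:
    """Time to transmit one packet onto a link of the given bit rate."""
    return packet_bytes * 8 / link_bps * 1000

def propagation_ms(distance_km: float) -> float:
    """Time for the signal to travel the given distance through fiber."""
    return distance_km / SPEED_IN_FIBER_KM_S * 1000

# A 1500-byte packet on a 2 Mbps access link vs. a 1 Gbps core link:
print(round(serialization_ms(1500, 2_000_000), 3))      # 6.0 ms
print(round(serialization_ms(1500, 1_000_000_000), 4))  # 0.012 ms
# Propagation over a hypothetical 1000 km long-haul span:
print(round(propagation_ms(1000), 1))                   # 5.0 ms
```

Note how the slow access link, not the long-haul core, dominates the serialization component; this matches the earlier point that access networks are the usual congestion and delay bottleneck.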
Some applications can compensate for finite amounts of delay, but once a certain amount is exceeded, the QoS becomes compromised. For example, some networking equipment can "spoof" an SNA session on a host by providing local acknowledgements when the network delay would cause the SNA session to time out. Similarly, VoIP gateways and phones provide some local buffering to compensate for network delays. Finally, delay can be both fixed and variable.

Examples of fixed delay are:
- Application-based delay, e.g., voice codec processing time and IP packet creation time by the TCP/IP software stack
- Data transmission (serialization delay) over the physical network media at each network hop
- Propagation delay across the network based on transmission distance

Examples of variable delay are:
- Ingress queuing delay for traffic entering a network node
- Contention with other traffic at each network node
- Egress queuing delay for traffic exiting a network node

2.13 Jitter

Jitter is the measure of delay variation between consecutive packets of a given traffic flow. Jitter has a pronounced effect on real-time, delay-sensitive applications such as voice and video. These real-time applications expect to receive packets at a fairly constant rate with fixed delay between consecutive packets. As the arrival rate varies, jitter impacts the application's performance. A minimal amount of jitter may be acceptable, but as jitter increases, the application may become unusable. Some applications, such as voice gateways and IP phones, can compensate for a finite amount of jitter. Since a voice application requires the audio to play out at a constant rate, the application will replay the previous voice packet until the next voice packet arrives. However, if the next packet is delayed too long, it is simply discarded when it arrives, resulting in a small amount of distorted audio. All networks introduce some jitter because of variability in delay introduced by each network node as packets are queued.
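The delay variation just described can be made concrete with a small sketch. The arrival times below are made up, and the constant 20 ms send spacing is an assumption (a common voice packetization interval, though nothing in this document fixes it).

```python
# Sketch of jitter as inter-packet delay variation: how far each
# inter-arrival gap deviates from the constant spacing at which the
# packets were sent.

SEND_INTERVAL_MS = 20.0  # assumed constant sender spacing

def jitter_per_packet(arrival_times_ms):
    """Deviation of each inter-arrival gap from the send interval."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    return [abs(g - SEND_INTERVAL_MS) for g in gaps]

# Packets delayed unevenly by queuing at congested nodes:
arrivals = [0.0, 20.0, 45.0, 60.0, 85.0]
print(jitter_per_packet(arrivals))  # [0.0, 5.0, 5.0, 5.0]
```

A jitter buffer in an IP phone absorbs deviations like these by delaying playout slightly; packets arriving later than the buffer can absorb are discarded, producing the distorted audio mentioned above.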
However, as long as the jitter is bounded, QoS can be maintained.

2.14 Loss

Loss can occur due to errors introduced by the physical transmission medium. For example, most landline connections have very low loss as measured by the Bit Error Rate (BER). However, wireless connections, such as satellite, mobile or fixed wireless networks, have a high BER that varies due to environmental or geographical conditions such as fog, rain, RF interference, cell handoff during roaming and physical obstacles such as trees, buildings and mountains. Wireless technologies often transmit redundant information, since packets will inherently get dropped some of the time due to the nature of the transmission medium.

Loss can also occur when congested network nodes drop packets. Some networking protocols, such as TCP (Transmission Control Protocol), offer packet loss protection by retransmitting packets that may have been dropped or corrupted by the network. When a network becomes increasingly congested, more packets are dropped, and hence more TCP retransmissions occur. If congestion continues, network performance will significantly decrease because much of the bandwidth is being used to retransmit dropped packets. TCP will eventually reduce its transmission window size, so that less and less data is in flight at a time. This eventually reduces congestion, resulting in fewer packets being dropped. Because congestion has a direct impact on packet loss, congestion avoidance mechanisms are often deployed. One such mechanism is called Random Early Discard (RED). RED algorithms randomly and intentionally drop packets once the traffic reaches one or more configured thresholds. RED takes advantage of the TCP protocol's window size throttling feature and provides more efficient congestion management for TCP-based flows.
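The RED drop decision can be sketched as a simple probability curve over the average queue depth. The thresholds and maximum drop probability below are illustrative assumptions; real implementations tune these per queue and typically use a weighted moving average of queue depth.

```python
# Sketch of a linear RED drop curve: no drops below a minimum threshold,
# a rising drop probability between the thresholds, and tail drop above
# the maximum threshold.
import random

MIN_TH, MAX_TH = 20, 80   # assumed queue-depth thresholds (packets)
MAX_P = 0.10              # assumed drop probability as avg queue nears MAX_TH

def red_drop_probability(avg_queue: float) -> float:
    """0 below MIN_TH, rising linearly to MAX_P at MAX_TH, 1 above MAX_TH."""
    if avg_queue < MIN_TH:
        return 0.0
    if avg_queue >= MAX_TH:
        return 1.0
    return MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)

def should_drop(avg_queue: float) -> bool:
    """Randomly drop an arriving packet according to the RED curve."""
    return random.random() < red_drop_probability(avg_queue)

print(red_drop_probability(10))  # 0.0  (below MIN_TH: never drop)
print(red_drop_probability(50))  # 0.05 (halfway between thresholds)
print(red_drop_probability(90))  # 1.0  (above MAX_TH: tail drop)
```

The early, random drops nudge individual TCP senders to shrink their windows before the queue overflows, which is why RED works well for TCP-like flows.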
Note that RED only provides effective congestion control for applications or protocols with "TCP-like" throttling mechanisms.

2.15 Application Requirements

Table 2 illustrates the QoS performance dimensions required by some common applications. As you can see from this table, applications can have very different QoS requirements. As these are mixed over a common IP transport network without applying QoS technologies, the traffic will experience unpredictable behavior.

                                          Sensitivity to
Application                  Bandwidth    Delay    Jitter    Loss
VoIP                         Low          High     High      Med
Video Conferencing           High         High     High      Med
Streaming Video on Demand    High         Med      Med       Med
eBusiness (Web browsing)     Low          Med      Med       Med
Email                        Low          Low      Low       High
File transfer                Med          Low      Low       High

Table 2: Application Performance Dimensions

a) Categorizing Applications

Networked applications can be categorized based on end-user expectations or application requirements. Some applications are between people, while other applications are between a person and a networked device's application, e.g., a PC and a web server. Finally, some applications are between networking devices, e.g., router-to-router. Table 3 categorizes applications into four different traffic categories, namely, Interactive, Responsive, Timely and Network Control. The table also includes example applications that fall into the different categories.

Traffic Category     Example Applications
Network Control      Critical Alarms, Routing, Billing, OAM
Interactive          VoIP, Interactive Gaming, Video Conferencing
Responsive           Streaming audio/video, eCommerce
Timely               Email, File transfer

Table 3: Application Traffic Categories

2.16 Conclusion

Multiprotocol Label Switching (MPLS) is a data forwarding technology that increases the speed and controls the flow of network traffic.
With MPLS, data is directed through a path via labels instead of requiring complex lookups in a routing table at every stop. When data enters a traditional IP network, it moves among network nodes based on long network addresses. With this method, each router on which a data packet lands must make its own decision, based on routing tables, about the packet's next stop on the network. MPLS, on the other hand, assigns a label to each packet to send it along a predetermined path.

3 LDP

3.1 Learning Objective

This chapter will make you understand the concepts of the Label Distribution Protocol (LDP), which is used in an MPLS network to distribute labels in order to set up paths for data communication.

3.2 INTRODUCTION

A label distribution protocol is a set of rules and procedures that one LSR can use to inform another LSR about which label will be used to forward MPLS traffic between and through them. The path set up by these bilateral agreements is called a label switched path (LSP). The MPLS architecture does not assume a single label distribution protocol. The following protocols can be used:
- Label Distribution Protocol (LDP)
- Constraint-based Routing with LDP (CR-LDP)
- RSVP with TE extensions (RSVP-TE)
- BGP-4

3.3 LABEL DISTRIBUTION PROTOCOL

LDP is a set of procedures and messages by which LSRs create LSPs through a network by mapping network layer routing information directly to data link layer switched paths. An LSP has two end points: one at a directly attached LSR and the other at a network egress LSR, i.e., a number of LSRs away. A FEC is defined for each LSP. Each FEC contains one or more FEC elements. Each element identifies which set of incoming packets will be mapped to an LSP at the ingress router. LDP defines the messages in the label distribution process and the procedures for processing those messages.
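The label mappings that a label distribution protocol hands out ultimately populate a per-LSR forwarding table keyed on the incoming label, so that forwarding is a single exact-match lookup and a label swap rather than a routing-table lookup at every hop. A minimal sketch follows; the label values and router names are made up for illustration.

```python
# Minimal sketch of the label swap an LSR performs. The table maps an
# incoming label to the outgoing label and next hop for that LSP.
# (Label values and next-hop names are hypothetical.)

lfib = {
    100: (200, "LSR-B"),  # LSP towards one FEC
    101: (201, "LSR-C"),  # LSP towards another FEC
}

def forward(in_label: int):
    """Swap the incoming label for the outgoing one: a single exact-match
    lookup per hop, in contrast to a longest-prefix routing lookup."""
    out_label, next_hop = lfib[in_label]
    return out_label, next_hop

print(forward(100))  # (200, 'LSR-B')
```

Only the ingress router classifies the packet into a FEC and imposes the first label; every subsequent LSR repeats this constant-time swap along the predetermined path.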
Label switching routers (LSRs) obtain information about incoming labels, next-hop nodes, and outgoing labels for specified FECs based on the local forwarding table, and use this information to establish LSPs. Two LDP peers set up an LDP session and exchange Label Mapping messages over the session to establish an LSP. When an LSR receives a Hello message from a peer, the LSR establishes an LDP adjacency with the peer. An LDP adjacency maintains the peer relationship between the two LSRs. There are two types of LDP adjacencies:
- Local adjacency: established by exchanging Link Hello messages between two LSRs.
- Remote adjacency: established by exchanging Targeted Hello messages between two LSRs.

LDP maintains the presence of a peer through these adjacencies, and the type of peer depends on the type of adjacency that maintains it. A peer can be maintained by multiple adjacencies; if it is maintained by both a local adjacency and a remote adjacency, it is a local and remote coexistent peer. Only peers can establish an LDP session.

LDP Messages

Two LSRs exchange the following messages:
- Discovery messages: used to announce or maintain the presence of an LSR on an MPLS network.
- Session messages: used to establish, maintain, or terminate an LDP session between LDP peers.
- Advertisement messages: used to create, modify, or delete a mapping between a specific FEC and a label.
- Notification messages: used to provide advisory information or error information.

LDP transmits Discovery messages using the User Datagram Protocol (UDP) and transmits Session, Advertisement, and Notification messages using the Transmission Control Protocol (TCP).

3.4 Label Space and LDP Identifier

Label space: A label space defines a range of labels allocated between LDP peers.

LDP identifier: An LDP identifier identifies a label space used by a specified LSR.
An LDP identifier consists of 6 bytes: a 4-byte LSR ID and a 2-byte label space identifier. An LDP identifier is written in the format <LSR ID>:<label space>.

LDP Router ID

If the mpls ldp router-id command is not executed, the router determines the LDP router ID as follows:
1. The router examines the IP addresses of all operational interfaces.
2. If these IP addresses include loopback interface addresses, the router selects the largest loopback address as the LDP router ID.
3. Otherwise, the router selects the largest IP address pertaining to an operational interface as the LDP router ID.

The normal (default) method for determining the LDP router ID may result in a router ID that is not usable in certain situations. For example, the router might select an IP address as the LDP router ID that the routing protocol cannot advertise to a neighboring router. The mpls ldp router-id command allows you to specify the IP address of an interface as the LDP router ID. Make sure the specified interface is operational so that its IP address can be used as the LDP router ID. When you issue the mpls ldp router-id command with the force keyword, the effect of the command depends on the current state of the specified interface: If the interface is up (operational) and its IP address is not currently the LDP router ID, the LDP router ID changes to the IP address of the interface. This forced change in the LDP router ID tears down any existing LDP sessions, releases label bindings learned via the LDP sessions, and interrupts MPLS forwarding activity associated with the bindings. If the interface is down (not operational) when the mpls ldp router-id interface force command is issued, then when the interface transitions to up, the
This forced change in the LDP router ID tears down any existing LDP sessions, releases label bindings learned via the LDP sessions, and interrupts MPLS forwarding activity associated with the bindings. LDP Bindings An LDP label binding is an association between a destination prefix and a label. The label used in a label binding is allocated from a set of possible labels called a label space. LDP supports two types of label spaces: Interface-specific—An interface-specific label space uses interface resources for labels. For example, label-controlled ATM (LC-ATM) interfaces use virtual path identifiers/virtual circuit identifiers (VPIs/VCIs) for labels. Depending on its configuration, an LDP platform may support zero, one, or more interface-specific label spaces. Platform-wide—An LDP platform supports a single platform-wide label space for use by interfaces that can share the same labels. For Cisco platforms, all interface types, except LC-ATM, use the platform-wide label space. 3.5 LDP Discovery Mechanisms An LDP discovery mechanism is used by LSRs to discover potential LDP peers. LDP discovery mechanisms are classified into the following types: Basic discovery mechanism: used to discover directly connected LSR peers on a link. An LSR periodically sends Link LDP Hello messages to discover LDP peers and establish local LDP sessions with the peers. The Link Hello messages are encapsulated in UDP packets with a specific multicast destination address and are sent using LDP port 646. A Link Hello message carries an LDP identifier and other information, such as the hello-hold time and transport address. If an LSR receives a Link Hello message on a specified interface, a potential LDP peer is connected to the same interface. Extended discovery mechanism: used to discover the LSR peers that are not directly connected to a local LSR. The Targeted Hello messages are encapsulated in UDP packets and carry unicast destination addresses and are sent using LDP port 646. 
A Targeted Hello message carries an LDP identifier and other information, such as the hello hold time and transport address. If an LSR receives a Targeted Hello message, the LSR has a potential LDP peer.

3.6 Process of Establishing an LDP Session

Two LSRs exchange Hello messages to establish an LDP session.

Figure 23: LDP session establishment

As shown in the figure above, the process of establishing an LDP session is as follows:
1. The two LSRs exchange Hello messages. After receiving the Hello messages carrying transport addresses, the two LSRs use the transport addresses to establish an LDP session. The LSR with the larger transport address serves as the active peer and initiates a TCP connection. In this example, LSRA serves as the active peer that initiates the TCP connection, and LSRB serves as the passive peer that waits for the TCP connection to be initiated.
2. After the TCP connection is successfully established, LSRA sends an Initialization message to negotiate the parameters used to establish an LDP session with LSRB. The main parameters include the LDP version, label advertisement mode, Keepalive hold timer value, maximum PDU length, and label space.
3. Upon receipt of the Initialization message, LSRB replies to LSRA in either of the following ways:
   - If LSRB rejects some parameters, it sends a Notification message to terminate LDP session establishment.
   - If LSRB accepts all parameters, it sends an Initialization message and a Keepalive message to LSRA.
4. Upon receipt of the Initialization message, LSRA performs either of the following operations:
   - If LSRA rejects some parameters, it sends a Notification message to terminate LDP session establishment.
   - If LSRA accepts all parameters, it sends a Keepalive message to LSRB.
After both LSRA and LSRB have accepted each other's Keepalive messages, the LDP session is successfully established. LDP peers then send messages, such as Label Mapping messages, over the LDP session to exchange label information with each other and establish LSPs. The relevant standards define the label advertisement, label distribution control, and label retention modes.

3.7 Label Advertisement Modes

An LSR on an MPLS network binds a label to a specific FEC and notifies its upstream LSRs of the binding. This process is called label advertisement. The label advertisement modes on upstream an