

Document Details


Uploaded by SilentSeries

Tags

computer networking, network layer, internet protocols, network architecture

Full Transcript


Chapter 4: Network Layer — Data Plane
Computer Networking: A Top-Down Approach, 8th edition. Jim Kurose, Keith Ross. Pearson, 2020.

Network layer: our goals
- understand the principles behind network-layer services, focusing on the data plane: network-layer service models; forwarding versus routing; how a router works; addressing; generalized forwarding; Internet architecture
- instantiation and implementation in the Internet: the IP protocol; NAT; middleboxes

Network layer: "data plane" roadmap
- Network layer: overview (data plane, control plane)
- What's inside a router: input ports, switching, output ports; buffer management, scheduling
- IP: the Internet Protocol: datagram format, addressing, network address translation, IPv6
- Generalized forwarding, SDN: match+action; OpenFlow: match+action in action
- Middleboxes

Network-layer services and protocols
- transport a segment from the sending host to the receiving host: the sender encapsulates segments into datagrams and passes them to the link layer; the receiver delivers segments to the transport-layer protocol
- network-layer protocols run in every Internet device: hosts and routers
- routers examine the header fields in all IP datagrams passing through them, and move datagrams from input ports to output ports to transfer them along the end-end path

Two key network-layer functions
- forwarding: move packets from a router's input link to the appropriate router output link
- routing: determine the route taken by packets from source to destination (routing algorithms)
Analogy (taking a trip): forwarding is the process of getting through a single interchange; routing is the process of planning the trip from source to destination.

Network layer: data plane, control plane
Data plane:
- local, per-router function
- determines how a datagram arriving on a router input port is forwarded to a router output port, using values in the arriving packet header
Control plane:
- network-wide logic
- determines how a datagram is routed among routers along the end-end path from source host to destination host
- two control-plane approaches: traditional routing algorithms (implemented in routers) and software-defined networking, SDN (implemented in remote servers)

Per-router control plane: individual routing-algorithm components in each and every router interact in the control plane to compute the forwarding behavior of the data plane.

Software-Defined Networking (SDN) control plane: a remote controller computes and installs the forwarding tables in routers (a minimal code sketch of this data-plane/control-plane split follows the service-model slide below).

Network service model
Q: What service model for the "channel" transporting datagrams from sender to receiver?
- example services for individual datagrams: guaranteed delivery; guaranteed delivery with less than 40 msec delay
- example services for a flow of datagrams: in-order datagram delivery; guaranteed minimum bandwidth to the flow; restrictions on changes in inter-packet spacing
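The slides contain no code; as a rough, hedged sketch (not from the textbook) of the forwarding/routing split described above, the fragment below separates a data-plane table lookup from an SDN-style control plane that installs the table entries. The Router and Controller classes, prefixes, and port numbers are invented for illustration.

```python
# Hypothetical sketch of the data-plane / control-plane split (illustrative only).
# Data plane: per-router, per-packet forwarding using a locally installed table.
# Control plane: network-wide logic (here, an SDN-style remote controller) that
# computes routes and installs forwarding-table entries in each router.

class Router:
    def __init__(self, name):
        self.name = name
        self.forwarding_table = {}          # destination prefix -> output port

    def install_entry(self, prefix, port):
        # invoked by the control plane (routing algorithm or remote controller)
        self.forwarding_table[prefix] = port

    def forward(self, dest_addr):
        # simple prefix match on the arriving packet's destination address
        # (real routers use longest-prefix match, sketched later)
        for prefix, port in self.forwarding_table.items():
            if dest_addr.startswith(prefix):
                return port
        return None                          # no match: drop the datagram


class Controller:
    def __init__(self, routers):
        self.routers = routers

    def compute_and_install_routes(self, routes):
        # 'routes' maps (router name, destination prefix) -> output port,
        # e.g. the result of a shortest-path computation over the topology
        for (router_name, prefix), port in routes.items():
            self.routers[router_name].install_entry(prefix, port)


r1 = Router("r1")
ctrl = Controller({"r1": r1})
ctrl.compute_and_install_routes({("r1", "223.1.1"): 2, ("r1", "223.1.2"): 3})
print(r1.forward("223.1.1.5"))   # -> 2
```

In a per-router control plane the install_entry calls would come from a routing process running on the router itself; in the SDN case they come from the remote controller, but the data-plane forward() logic is unchanged.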
Network-layer service model
Quality of Service (QoS) guarantees?

Internet "best effort" service model — no guarantees on:
i. successful datagram delivery to the destination
ii. timing or order of delivery
iii. bandwidth available to the end-end flow

Network architecture | Service model                 | Bandwidth          | Loss     | Order    | Timing
Internet             | best effort                   | none               | no       | no       | no
ATM                  | Constant Bit Rate             | constant rate      | yes      | yes      | yes
ATM                  | Available Bit Rate            | guaranteed minimum | no       | yes      | no
Internet             | Intserv Guaranteed (RFC 1633) | yes                | yes      | yes      | yes
Internet             | Diffserv (RFC 2475)           | possible           | possibly | possibly | no

Reflections on best-effort service
- simplicity of mechanism has allowed the Internet to be widely deployed and adopted
- sufficient provisioning of bandwidth allows the performance of real-time applications (e.g., interactive voice, video) to be "good enough" "most of the time"
- replicated, application-layer distributed services (datacenters, content distribution networks) connecting close to clients' networks allow services to be provided from multiple locations
- congestion control of "elastic" services helps
It's hard to argue with the success of the best-effort service model.

Network layer: "data plane" roadmap (continued) — What's inside a router: input ports, switching, output ports; buffer management, scheduling.

Router architecture overview
High-level view of a generic router architecture:
- routing, management control plane (software): routing processor, operating on a millisecond time frame
- forwarding data plane (hardware): high-speed switching fabric connecting router input ports to router output ports, operating on a nanosecond time frame

Input port functions
- line termination (physical layer: bit-level reception)
- link-layer protocol, receive side (e.g., Ethernet; chapter 6)
- lookup, forwarding, and queueing, then transfer into the switch fabric
Decentralized switching:
- using header field values, look up the output port in a forwarding table held in input-port memory ("match plus action")
- goal: complete input-port processing at "line speed"
- input-port queueing: occurs if datagrams arrive faster than the forwarding rate into the switch fabric
- destination-based forwarding: forward based only on the destination IP address (traditional); a longest-prefix-match sketch follows below
- generalized forwarding: forward based on any set of header field values
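The slides describe destination-based forwarding as a "match plus action" lookup on the destination IP address. As a rough illustration (not from the slides), here is a minimal longest-prefix-match lookup in Python; the table entries, port numbers, and function name are invented for the example, and real routers implement this in hardware (e.g., TCAMs or tries) rather than a linear scan.

```python
# Minimal longest-prefix-match (LPM) sketch for destination-based forwarding.
# Prefixes, ports, and names below are illustrative only.
from ipaddress import ip_address, ip_network

# forwarding table: (prefix -> output port), as the control plane might install it
forwarding_table = [
    (ip_network("11.0.0.0/8"),  0),
    (ip_network("11.1.0.0/16"), 1),
    (ip_network("11.1.2.0/24"), 2),
    (ip_network("0.0.0.0/0"),   3),   # default route
]

def lookup_output_port(dest: str) -> int:
    """Return the output port of the longest (most specific) matching prefix."""
    addr = ip_address(dest)
    matches = [(net.prefixlen, port) for net, port in forwarding_table
               if addr in net]
    # the match with the greatest prefix length wins
    return max(matches)[1]

print(lookup_output_port("11.1.2.7"))   # -> 2 (/8, /16 and /24 all match; /24 wins)
print(lookup_output_port("11.5.9.1"))   # -> 0 (only the /8 and the default match)
print(lookup_output_port("99.9.9.9"))   # -> 3 (default route)
```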
Switching fabrics
- transfer a packet from an input link to the appropriate output link
- switching rate: the rate at which packets can be transferred from inputs to outputs; often measured as a multiple of the input/output line rate R; with N inputs, a switching rate of N times the line rate (NR) is desirable
- three major types of switching fabrics: memory, bus, interconnection network

Output port queuing ("this is a really important slide")
- buffering is required when datagrams arrive from the fabric faster than the link transmission rate R; datagrams can be lost due to congestion and lack of buffers
- drop policy: which datagrams to drop if no free buffers remain?
- a scheduling discipline chooses among queued datagrams for transmission; priority scheduling (who gets the best performance?) raises network-neutrality questions
- buffering occurs when the arrival rate via the switch exceeds the output line speed; queueing (delay) and loss result from output-port buffer overflow

Buffer management
Abstraction: packet arrivals enter a queue (waiting area) in front of the link (server), which drains at rate R.
- drop: which packet to add or drop when buffers are full — tail drop (drop the arriving packet) or priority (drop/remove on a priority basis)
- marking: which packets to mark to signal congestion (ECN, RED)

Packet scheduling: FCFS
Packet scheduling: deciding which packet to send next on the output link — first come, first served; priority; round robin; weighted fair queueing.
- FCFS: packets are transmitted in order of arrival at the output port; also known as first-in-first-out (FIFO)
- real-world examples?

Scheduling policies: priority
Priority scheduling:
- arriving traffic is classified and queued by class; any header fields can be used for classification
- send a packet from the highest-priority queue that has buffered packets
- FCFS within each priority class

Scheduling policies: round robin
Round robin (RR) scheduling:
- arriving traffic is classified and queued by class; any header fields can be used for classification
- the server cyclically, repeatedly scans the class queues, sending one complete packet from each class (if available) in turn

Scheduling policies: weighted fair queueing
Weighted fair queueing (WFQ): generalized round robin.
- each class i has a weight w_i and gets a weighted share of service in each cycle: w_i / Σ_j w_j
- provides a minimum bandwidth guarantee per traffic class
A weighted-round-robin sketch of this idea follows below.
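As a rough, hedged illustration of the per-class weighted service that WFQ targets, here is a small weighted round-robin scheduler in Python. The class names, weights, and packet strings are invented for the example, and this is only an approximation: true WFQ schedules by per-packet virtual finish times rather than by packets-per-cycle.

```python
# Weighted round-robin sketch approximating WFQ's per-class weighted service.
# Classes, weights, and packets below are illustrative only.
from collections import deque

class WeightedRoundRobinScheduler:
    def __init__(self, weights):
        # weights: class name -> integer weight w_i (packets served per cycle)
        self.weights = weights
        self.queues = {cls: deque() for cls in weights}

    def enqueue(self, cls, packet):
        self.queues[cls].append(packet)

    def next_cycle(self):
        """Serve up to w_i packets from each class queue, in turn."""
        served = []
        for cls, w in self.weights.items():
            for _ in range(w):
                if self.queues[cls]:
                    served.append(self.queues[cls].popleft())
        return served

# Example: class "a" has weight 3, class "b" has weight 1, so while both are
# backlogged "a" receives 3/4 of the link, matching w_a / (w_a + w_b).
sched = WeightedRoundRobinScheduler({"a": 3, "b": 1})
for i in range(5):
    sched.enqueue("a", f"a{i}")
    sched.enqueue("b", f"b{i}")

print(sched.next_cycle())   # ['a0', 'a1', 'a2', 'b0']
print(sched.next_cycle())   # ['a3', 'a4', 'b1']
```

Setting every weight to 1 reduces this to the plain round-robin policy on the previous slide, and an empty class simply forfeits its share for that cycle, which is why a backlogged class still gets its minimum bandwidth guarantee.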
