CMPE148 Midterm Review.pdf

Full Transcript

The OSI Layer, TCP/IP Layers TCP/IP Model - What every computer supports and implements - Layers: each layer defines some kind of protocol or standard that is used when computers connect Application: protocols used when opening web browsers Transport: TCP, UDP, port #s Network: IP addresses and routers; how routers know where to find hosts and how to route traffic Data Link: MAC addresses (how hosts on the same network can communicate). Switch uses layer-two addresses (MAC addresses) to know where to send information Physical: involves Ethernet cables OSI Model - Same layers as TCP/IP with a few differences Application Presentation Session Transport: same as TCP/IP Network: same as TCP/IP Data Link: same as TCP/IP Physical: same as TCP/IP Chapter 1 - Introduction 1.1 - What is the Internet? Nuts and Bolts View - At the edge: we have devices we use to connect to the Internet - hosts/end systems - Examples: computers, smartphones, streaming devices, etc. - Packet switches: forward packets (chunks of data) between each other & between hosts & devices - Routers & switches - Communication links: connect routers, switches, hosts and end systems - Networks: links, routers, hosts, switches, end devices are assembled into networks; usually managed by an entity - Internet: "network of networks"; interconnected ISPs - Sending & receiving of messages (information) among routers, switches, hosts, and end devices are controlled by protocols - Everywhere - Describe a standard way of doing something; need a body that defines them - IETF: Internet Engineering Task Force; standards are called RFCs (requests for comments) Internet: Services View - Internet provides an interface for applications that allows them to send & receive information to and from each other - From the services perspective, delivers information from point A to point B - Internet applications at endpoints can be complex and sophisticated - Application-level complexity sits on top of a services infrastructure that delivers packets from one location to another - Computing happens in the application; the infrastructure focuses on information delivery - Infrastructure: provides services to apps - Also provides programming interface to distributed applications - "Hooks" allow sending/receiving apps to "connect" to and use Internet services - Providing service options is analogous to the U.S. postal service Protocols - Human protocols are like asking what time it is and receiving a response back (in this case, it is the time you asked for) - Rules for specific messages sent; specific action taken when receiving a message Network Protocols - Analogous to human protocols - Apps, hosts, routers, switches, and links exchange messages and take action based on the network protocol - Protocol definition: defines the format and order of messages sent and received among network entities & actions taken on message transmission and receipt - Set of rules that define how data is transmitted & received over a network - Protocols are essential for ensuring data is transmitted properly over networks - Ensures diff devices can communicate effectively, even if they are made by diff manufacturers or run different operating systems - What defines a protocol? - Syntax: structure/format of data (header, footer, data fields) - Semantics: meaning of each section of bits (actions to be taken when command is received) - Timing: rules for when data should be sent & how to handle delays and synchronization
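- To make syntax/semantics concrete, here is a toy (hypothetical) protocol sketched in Python; the header layout and message types are invented for illustration, not any real Internet protocol:

    # Toy protocol sketch. Syntax: a fixed 4-byte header (1-byte version,
    # 1-byte message type, 2-byte payload length) followed by the payload.
    # Semantics: type 0 = request, type 1 = reply.
    import struct

    HEADER_FMT = "!BBH"  # network byte order: version, type, length

    def build_message(version: int, msg_type: int, payload: bytes) -> bytes:
        return struct.pack(HEADER_FMT, version, msg_type, len(payload)) + payload

    def parse_message(data: bytes):
        version, msg_type, length = struct.unpack(HEADER_FMT, data[:4])
        return version, msg_type, data[4:4 + length]

    msg = build_message(1, 0, b"what time is it?")
    print(parse_message(msg))  # (1, 0, b'what time is it?')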
1.2 - The Network Edge - Access network: connects edge devices to the first-hop router on the path into the Internet - Might connect a network itself to the Internet - Physical media (copper wires, etc.) - Modems will often rate limit how fast you send and receive - Cable network is a shared network; shared wire - If frequencies are shared, it eats into the amount of data another user receives - Network of cable and fiber attaches homes to the ISP router - Homes share access network to cable headend A Closer Look at Internet Structure - Network edge: - Hosts: host/run network applications - May be a client that requests & receives service or a server that provides a service - Servers often in data centers - Access networks, physical media: - Wired, wireless communication links - Network core: - Individual, administratively scoped networks interconnect & form the Internet Access Networks & Physical Media - Connects end systems to Internet - Network connects device to first-hop router on path from source to destination - Types: - Residential access nets - Institutional access networks (schools, companies) - Mobile access networks (WiFi, cellphone companies) - Look for transmission rate (how fast) and to what degree one user must share that network w/ other users Access Networks: Cable Based Access - Physical cable connects multiple houses to a cable headend - Signals to & from the houses are sent in the cable at different frequencies - Signals sent at different frequencies do not interfere with each other - Frequency division multiplexing (FDM): diff channels transmitted in diff frequency bands - Users often share frequencies with others - HFC: hybrid fiber coax - Asymmetric: transmits faster downstream to home than upstream; 40 Mbps - 1.2 Gbps downstream and 30-100 Mbps upstream - Modems rate limit how fast you send and receive Access Networks: Digital Subscriber Line - Uses the existing telephone line to the central office DSLAM - Connects you directly to the central office; you are not sharing transmission capacity or bandwidth with your neighbors on the way to the central office - DSL lines are asymmetric, 24-52 Mbps downstream transmission rate & 3.5-16 Mbps upstream transmission rate - Depends on distance between central office and home - Too far: cannot do DSL to central office Access Networks: Home Networks - DSL/Cable link coming from local telco/cable network - Cable or DSL modem (modulator/demodulator) on house-end of link - Wired and wireless links to devices within the home - Links are wired Ethernet; runs at 1 Gbps - Also have WiFi wireless access; runs at 10s or 100s of Mbps - Router, modem, WiFi, Ethernet combined into one box - Home devices (hosts and end systems) Wireless Access Networks - Two classes of wireless networks: local (WiFi) and wide-area (3G, 4G, 5G) - For both, there is an entity (base station or access point) to and from which the end devices transmit and receive data - WLAN (wireless local area networks) - Operates within or around a building (about 100 ft) - Operates at diff speeds; 11, 54, 450 Mbps - Protocols standardized by IEEE - Wide-area cellular access networks - Operated by mobile cellular operators - Transmission distance measured in 10s of kms - Transmission rate anywhere from 1-10 Mbps per user Access Networks: Enterprise Networks - Mix of Ethernet and
wireless WiFi links - Difference from home network: multiple switches and routers to handle large # of devices connected to enterprise network - Ethernet: wired access at 100 Mbps, 1 Gbps, 10 Gbps - WiFi: wireless access points at 11, 54, 450 Mbps Access Networks: Data Center Networks - Connects servers to each other and the Internet at 100s of Gbps Host: sends packets of data - Host sends data to first-hop switch - Host sending function: - Host has data it wants to send - Host takes data it wants to send and breaks it into smaller chunks (packets) - Adds additional information to each chunk of data; packet header - Protocols dictate what information is added to the header - Packet + header will have length of L bits - Host transmits L-bit packet into the access network at transmission rate R measured in bits per second - R varies from one type of access network to another - R is the link transmission rate, informally known as the link capacity/link bandwidth - Packet transmission delay = time needed to transmit L-bit packet into link = L (bits) / R (bits/sec) - If one wants to send an L-bit packet into a link at transmission rate R, the time it takes to send the bits into the link is given by the equation above Links: Physical Media - We want to send digital bits over some physical media from sender to receiver - Bit: propagates between transmitter/receiver pairs - Physical link: what lies between the transmitter and receiver - Guided media: signals propagate in solid media: copper, fiber, coax - Unguided media: signals propagate freely (like radio waves) Twisted Pair (TP) - Ethernet or ADSL; runs at hundreds of Mbps and sometimes Gbps - Susceptible to EM noise Coaxial Cable - Carries cable network access into home and operates at 100s of Mbps - Two concentric copper conductors - Bidirectional Fiber Optic Cable - Carries light pulses - Operates at 100s of Gbps and higher - Low error rates - Ideal for communication, but transmitting and receiving components tend to be more expensive than for traditional copper wires Wireless Radio - Bits modulated onto signal carried in some frequency band in EM spectrum - No physical wire - Transmissions are broadcast; any device near transmitting device may be able to receive transmitted signals - Interference concerns - Harsh environment for transmitting - Radio signals fade over distance - Signals are reflected by, blocked by, or pass through objects - Subject to noise generated by other devices that emit RF signals Radio Link Types: - Wireless LAN (WiFi) - 10-100s Mbps; 10s of meters of distance - Wide-area: transmits data at 10s of Mbps over 10 km distance - Bluetooth: cable replacement - Low data rates and short distances - Terrestrial microwave: 10s of Mbps (e.g., 45 Mbps); point-to-point - Satellite: propagation delay - Time from when a bit is sent at the sender to when it is received at the receiver
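- A small sketch (Python; the numbers are assumptions for illustration) combining the transmission-delay formula above (L/R) with propagation delay (distance / propagation speed, covered again in section 1.4):

    # Sketch: packet transmission delay (L/R) vs. propagation delay (d/s).
    L = 1_000 * 8    # packet size: 1,000 bytes = 8,000 bits (assumed)
    R = 100e6        # link transmission rate: 100 Mbps (assumed)
    d = 50_000       # link length: 50 km, in meters (assumed)
    s = 2e8          # propagation speed in media: ~2 * 10^8 m/s

    transmission_delay = L / R   # time to push all bits into the link
    propagation_delay = d / s    # time for one bit to travel the link's length

    print(f"transmission: {transmission_delay * 1e6:.1f} us")  # 80.0 us
    print(f"propagation: {propagation_delay * 1e3:.3f} ms")    # 0.250 ms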
Network Core: Packet/Circuit Switching, Internet Structure - Network core: set of routers interconnected by a set of communication links - Core operation based on packet switching - End hosts take application-level messages, divide those messages into chunks of data, put those chunks of data inside packets & send the packets into the Internet - Network forwards the packets along a path from source node to destination node (web server to laptop that's running a web browser that made a request to the web server) Two key network-core functions - Two key functions performed inside network core: forwarding (switching) and routing - Forwarding is a local action - Moves arriving packet from router's input link to the appropriate router output link - Forwarding is controlled by a forwarding table - Each router has its own forwarding table - When a packet arrives, router looks inside packet for destination address & looks up the destination address in its forwarding table, then transmits it onto the output link that leads to the destination - How are the table's contents created? - Routing: global action of determining source-to-destination paths taken by packets - Routing algorithms compute the paths and compute the local per-router forwarding tables that realize the end-to-end forwarding path Packet-switching: store and forward - Bits in a packet being transmitted from one router to the next - Packet transmission delay: takes L/R seconds to transmit (push out) L-bit packet into a link at R bps (bits per second) - Transmitted bits received and gathered at receiving end of link until full packet is received - Once fully received, can be forwarded to next hop - This is the store-and-forward operation of a packet-switched network Packet-switching: queueing - Assume host A is sending packets to host C and host B is sending packets to host E - Transmission rate R is 100 Mb/s from A to first router and B to first router - Transmission rate from first router to second router is 1.5 Mb/s - What happens as packets arrive to the first-hop router? - Router can only transmit at 1.5 Mb/s, and packets arrive faster than 1.5 Mb/s if both A and B transmit a lot of packets at the same time - If too many packets arrive at the same time, a queue of packets forms at the first router - Queueing occurs when work arrives faster than it can be serviced - Packet queues form at a router's outbound link whenever the arrival rate in bits per second on the input link exceeds the transmission rate (bits per second) of that output link for some period of time - Packet queues → packet delays - Packets have to wait in routers rather than being forwarded to the destination - Only so much memory → when the queue is too long, router memory is exhausted, and a packet arrives, the packet is dropped or lost at that router - Packet delay and loss is a major source of headache for network protocols Circuit Switching - Call flows from source → destination - Before call starts, all resources within network needed for the call are allocated to that call from source to destination - Once call begins, call will have reserved enough transmission capacity for itself so that there's no queueing delay, only propagation delay, and no loss of data within the network, because link capacity is reserved for the call (image on the top) - Each link has four circuits - Call from top left to bottom right allocated to the second circuit on the top link and the first circuit on the right link - Circuits are dedicated resources; not shared w/ any other users - Like a wire from source to destination - Since resources are reserved for exclusive use of a call, circuits can go idle if there is no data to send - If there is no data to send on the call, that capacity is lost - No other calls can use it Circuit Switching: FDM and TDM - Done in one of two ways: FDM or TDM - FDM (frequency division multiplexing): EM or optical spectrum divided into narrow frequency bands - Each call allocated its own band and can transmit at the full rate allowed by that band - TDM (time division multiplexing) - Time divided into slots - Source can transmit only during its allocated time slots, but can do so at the higher maximum rate of the full (wider) frequency band
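- A sketch contrasting the two sharing schemes (link rate and circuit count are made-up numbers):

    # Sketch: per-call transmission under FDM vs. TDM on a 4-circuit link.
    link_rate = 1.536e6   # total link capacity in bps (assumed)
    circuits = 4          # circuits (calls) sharing the link

    fdm_rate = link_rate / circuits   # FDM: a narrow band, available all the time
    tdm_rate = link_rate              # TDM: the full link rate...
    tdm_share = 1 / circuits          # ...but only during 1 of every 4 time slots

    print(fdm_rate, tdm_rate * tdm_share)  # same average rate: 384000.0 bps each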
Packet Switching vs. Circuit Switching (in notebook) - Is packet switching a "winner"? - Good for sending bursty data; source only occasionally has data to send (no call set-up, no resource reservation); host just starts sending the data it needs to send - Congestion is possible - Packet delay and loss due to buffer overflow - Protocols (e.g., TCP) react to congestion by decreasing the sender's sending rate when congested - Congestion and loss can be avoided/mitigated - Is it possible to provide circuit-like behavior with packet switching? - Complicated Internet Structure: "network of networks" - Hosts connect to Internet through ISPs - Access ISPs must be interconnected so that any two hosts can send packets to each other - Given millions of access ISPs, how do we connect them together? - Need a way to connect them to get end-to-end paths - We could connect each access ISP to every other access ISP, but that requires O(n^2) connections; does not scale - We create one global transit ISP; each ISP at the edge connects to the global transit network - One access ISP reaches another access ISP through this network - If one global ISP is a viable business, there will be competitors offering backbone network service - Global backbone networks are interconnected with each other - A network peers with another network when they are directly interconnected - Locations where multiple networks peer with each other are called Internet exchange points (peering points) - Regional networks form Internet access networks closer to home and connect to the global backbone 1.4 - Performance How do Packet Delay and Loss Occur? - Packets queue in router buffers, waiting their turn for transmission - Queue length grows at a router's output link when the arrival rate to the link (temporarily) exceeds the output link's capacity - Packet loss occurs when the memory to hold queued packets fills up Packet Delay: four sources - Processing delay (nodal processing) - Delay associated with forwarding packets through a switch; forwarding table look-up, integrity checks - Queueing delay: amount of time a packet has to wait in the queue at an output link for transmission - Amount of time spent queueing & waiting for transmission depends on the congestion level of the outgoing link - Transmission delay: number of bits in the packet L divided by transmission rate R - Once a packet begins transmission, it is sent into the link at transmission rate R; it takes a certain amount of time for all of the bits in the packet to be pushed into the outgoing link - Propagation delay: amount of time from when a bit first enters the sending side of the link until it pops out at the receiving side of the link - Propagates through media at near the speed of light (~2 * 10^8 m/s) - Formula: d/s (d is the length of the physical link, s is the propagation speed) Caravan Analogy - Cars are bits, caravan is a packet, a car passing through a toll booth is like transmitting a bit - Car drives (propagates) on to the next toll booth - How long from when the last car of the caravan leaves the first tollbooth until the entire caravan is lined up before the second tollbooth? - Assumptions - Takes 12 seconds to service a car (bit transmission time) - Cars propagate at 100 km/hr - Toll booths are 100 km apart - Time to push entire caravan through toll booth onto the highway: 12 seconds per car, 10 cars per caravan ⇒ 120 seconds to transmit the caravan - Propagation delay: - Time for last car to propagate from 1st to 2nd toll booth: 100 km / (100 km/hr) ⇒ 1 hr for the last bit to propagate from the first toll booth to the second toll booth - Total time: 1 hr + the time to push the entire caravan through ⇒ 62 minutes (60 min + 2 min)
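- The same caravan arithmetic as a quick sketch:

    # Sketch: the caravan analogy's numbers (cars = bits, caravan = packet).
    cars = 10             # cars per caravan ("bits per packet")
    service_time = 12     # seconds to push one car through a tollbooth
    distance = 100        # km between tollbooths
    speed = 100           # km/hr propagation speed

    transmit_min = cars * service_time / 60   # 120 s = 2 minutes of "transmission"
    propagate_min = distance / speed * 60     # 1 hr = 60 minutes of "propagation"
    print(transmit_min + propagate_min)       # 62.0 minutes total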
Packet queueing delay - a: average packet arrival rate, L: packet length in bits, so L * a is the arrival rate of bits - R: service rate of bits (link transmission rate) - L * a / R ⇒ traffic intensity - Ratio of arriving bits to the system's capacity to transmit the bits - When the ratio is small, small queue - When greater than 1, more work arriving on average than the system's capacity to do that work (infinite delay) - When close to 1: delays get large very fast "Real" Internet delays and routes - Traceroute: measures and inspects what happens in terms of delay on a path from source to destination - Runs on laptop/computer - Live measurements of packet delay from sender (computer) to routers along a path toward a destination - How it works: - Sends three packets to first-hop router - First-hop router sends a reply message in response to each packet - Traceroute sender measures RTT from when it sends the message until it gets the reply - Displays those RTT measurements - Sends three packets to second-hop router and continues until the final destination is reached Packet Loss - Occurs when router buffers fill up & an arriving packet has no place to be stored - Queue (buffer) preceding a link has finite capacity - Packet arriving to a full queue is dropped (lost) - Lost packet may be retransmitted by the previous node, by the source end system, or not at all Throughput - Rate (in bits/second) at which bits are being sent from a sender to a receiver - Instantaneous: rate at a given point in time (short) - Average: rate over a longer period of time - Analogous to sending fluid through a pipe - Sender sends water at a rate - Each transmission link can carry fluid at a rate; some are thin and some are larger - What to consider: when an end-to-end flow carries packets over multiple pipes serially, how does the capacity of each pipe, the maximum rate at which a pipe can carry fluid, determine the overall end-to-end throughput that the flow receives? - Scenarios: - Rs < Rc - End-to-end throughput limited to Rs - Rs > Rc - End-to-end throughput limited to Rc - In general: throughput limited by the capacity of the thinnest pipe (the link with the smallest transmission rate); the bottleneck link
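- In code form, end-to-end throughput is just a min over the links on the path (rates are made-up):

    # Sketch: end-to-end throughput is set by the bottleneck link.
    Rs = 50.0   # server -> router link rate, Mbps (assumed)
    Rc = 20.0   # router -> client link rate, Mbps (assumed)

    throughput = min(Rs, Rc)
    print(throughput)  # 20.0 Mbps: Rc is the bottleneck here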
Throughput: network scenario - Individual flows interact with each other inside the network in terms of throughput - 10 servers and 10 clients, each pair w/ a single connection - Links at the edge of the network are dedicated - Shared link w/ capacity R fairly shares the available bandwidth so that each flow's share is R/10 - Per-connection end-to-end throughput is the minimum of Rc, Rs, and R/10 - Since 10 flows share the link, each flow gets a capacity of R/10 1.5 - Layering, Encapsulation - Layers: each layer implements a service - Done through internal-layer actions - Relies on services provided by the layer below - Advantages: - Shows different pieces of the system and their relationships with each other - Modularization eases maintenance; a layer takes information from the layer above & uses services of the layers below to implement its own service - If we change how a service is implemented but keep the service interfaces, we localize the changes Layered Internet Protocol Stack - Application: includes application-layer protocols that control sending & receiving of messages among distributed pieces of the application - Transport: transports application-layer messages from one process to another - Network layer: transports data from one end device or host to another - Internet network layer does not provide reliable host-to-host transport; best-effort service - Link layer: transfers data between two network devices that are at either end of the same communication link - Physical layer: controls sending of bits into the link Services, Layering, Encapsulation - Applications exchange messages to implement some application service using services of the transport layer - Transport layer: messages exchanged by the transport layer from one part of the network to the other - Takes message from application layer & includes additional information to create a new data unit - Protocol data unit called a transport-layer segment; a segment is the unit of data exchanged between entities of the transport layer - Includes info identifying the process to which the message is to be delivered at the destination, because there might be a lot of processes running - Encapsulation: process of taking a data unit from a higher layer & adding information to create a new data unit in another layer - Happens everywhere - Network layer: encapsulates transport-layer segment and adds its network-layer header information to create a datagram - Protocol data unit used in this layer - Link layer: encapsulates datagram and adds its own link-layer information ⇒ frame
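- A sketch of encapsulation as nested headers (the header strings are placeholders, not real protocol formats):

    # Sketch: each layer wraps the data unit from the layer above with its header.
    message = b"GET /index.html"           # application-layer message
    segment = b"TCP_HDR|" + message        # transport layer: message -> segment
    datagram = b"IP_HDR|" + segment        # network layer: segment -> datagram
    frame = b"ETH_HDR|" + datagram         # link layer: datagram -> frame
    print(frame)  # b'ETH_HDR|IP_HDR|TCP_HDR|GET /index.html'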
Chapter 2 - Application Layer - Network applications: social media, Web, multi-player games, YouTube, etc. Creating a Network App - We worry about the services provided by the transport layer and the application-layer interface - In general, we write programs that run on different end systems and communicate over the network Client-Server Paradigm - Server and client - Servers are always-on hosts - Permanent IP address so clients know where to contact them - Hosted in home, company, university, commercial data centers - Clients operate by contacting and communicating with a server - Intermittently connected (when phone or laptop is connected to the Internet; will not have a permanent IP address) - Clients do not communicate directly with each other; they interact with servers - Examples are HTTP, IMAP, FTP Peer-to-Peer Architecture - No server; peers in the system communicate with each other - A peer requests service from other peers and provides service in return to other peers - Seen in file sharing; a peer requests files from other peers but also serves files to those other peers - Peers are intermittently connected to the Internet and change IP addresses - They come and go; management is more difficult Processes Communicating - Network applications consist of a set of interacting pieces (either client-server or peer-to-peer) - Will not be a stand-alone program; will consist of multiple programs that we write, compile, and run - When they are running, they are instantiated as processes - Process: executing version of a program; these processes communicate with each other - In a single computer: inter-process communication (IPC) - Separate computers (separate hosts/devices): communicate using messages - Client process: process that initiates communication - Server process: process that waits to be contacted Sockets - Process sends and receives messages to and from sockets that it creates - Sockets are like doors - Create a door: we send messages into the door and we receive messages back out of the door - Sending and receiving processes rely on the underlying infrastructure (transport, network, link) to deliver messages from a socket at the sending process to the receiving process - Two sockets involved whenever a sender and receiver communicate: one on each side of that communication Addressing Processes - When you want to communicate with somebody, we need some addressing information - City, state, etc. - Phone number, area code - Need some information about how to address messages that go to the other end of a socket endpoint - When we create a socket, we have two important pieces of information: the IP address of the host and the port number - Some port numbers are associated with a specific service and protocol - Port 80 will connect to the web server at that host - Port 25 connects you to an email server
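- A minimal sketch of the socket idea in Python (the loopback address and port 12000 are arbitrary example values, and a server process is assumed to be listening there):

    # Sketch: a TCP client socket, addressed by (IP address, port number).
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(("127.0.0.1", 12000))  # server's IP address + port identify the process
        s.sendall(b"hello")              # send a message into the "door"
        reply = s.recv(1024)             # receive a message back out of the "door"
    print(reply)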
Application-layer protocol defines… - A protocol defines the format and order of messages sent and received among network entities, and the actions taken on message receipt and transmission - For an application-layer protocol, we need to define the types of messages that are exchanged, the syntax (fields in the message and how they are delineated), and the semantics (meaning of information in the fields) - We also need to consider the actions taken before and after sending or receiving a message - Open protocols: message syntax, semantics, and actions are publicly available and known to all (for example, RFCs, where Internet protocols are defined) - Other protocols are proprietary; they are owned by a company and their operation is not publicly known Transport Layer Services an App Needs - Data integrity: - Reliable data transfer is a service needed by many applications (web transactions, file transfer) - Not all applications need reliable data transfer; voice and video can tolerate packet loss - Timing guarantee: Internet telephony and games require low delay to be effective - Transport layer may provide a delay guarantee from sending process to receiving process - Throughput: some applications need a certain amount of throughput to be effective - Streaming videos need a certain amount of throughput to send the number of bits per second required by the video - Elastic applications: able to make use of whatever throughput they can get - Security: encryption of transported data, data integrity, etc. Transport Service Requirements: common applications Internet Transport Protocols Services - TCP service: - Reliable data transport between sending and receiving process - Flow control: sender won't overflow the receiver's available buffers - Congestion control: throttles sender when the network is overloaded - Connection-oriented: handshaking required between client and server before data begins to flow - Setup between client and server processes through a handshake - Does not provide timing, throughput, or security guarantees - UDP service: - Unreliable data transfer: no guarantees made - Best-effort attempt to deliver data from sender to receiver; no promises about reliability - Does not provide reliability, flow control, congestion control, timing, throughput, or security guarantees - Why do we bother with UDP? - We can build additional services that are not provided by UDP on top of UDP in the application layer
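- In socket terms, the TCP/UDP choice is a single parameter at socket-creation time (sketch; port 12001 is an arbitrary example value):

    # Sketch: the transport service an app gets is picked when the socket is made.
    import socket

    tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP: reliable, connection-oriented
    udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP: best effort, no handshake

    # UDP needs no connection setup; each message is addressed individually.
    udp_sock.sendto(b"ping", ("127.0.0.1", 12001))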
Internet Applications and Transport Protocols Securing TCP - Vanilla TCP and UDP sockets: - Initial socket abstraction had no notion of security associated with it - No encryption of data sent into sockets and no notion of endpoint authentication that says who you are - Had to build it in the application layer itself to do so - TLS (Transport Layer Security) - Implemented in user application space on top of TCP sockets - Provides encryption, data integrity, and endpoint authentication services that can be used by an application Web and HTTP - A web page consists of objects that can be stored on different Web servers - Can be an HTML file, JPEG image, audio file - Web pages and referenced objects are addressable by a URL - Consists of a host name and a path name - HTTP Overview - HTTP: hypertext transfer protocol - Adopts client/server model - Client: web browser that requests, receives (using HTTP protocol), and displays Web objects - Server: traditional web server that only serves web pages, or a more general-purpose server that provides numerous services - HTTP uses the transport services provided by the TCP protocol - HTTP client opens a TCP connection to a web server using port 80 - One or more messages (HTTP messages) exchanged between client and server - Then, TCP connection is closed - More formally: - Client initiates TCP connection (socket) to server on port 80 - Server accepts TCP connection from client - HTTP messages (application-layer protocol messages) are exchanged between the browser (HTTP client) and Web server (HTTP server) - TCP connection is closed - HTTP is a stateless protocol - Server does not maintain internal state about ongoing requests - Single request for an object and a single reply; does not worry about steps in transactions and rolling back when one fails - Reason: protocols that maintain state are complex - Must deal with clean-up problems - Returning to the initial state and resolving inconsistencies in the state HTTP Connection Types - HTTP CONNECTIONS BETWEEN BROWSER AND SERVER ARE DIFFERENT FROM THE TCP CONNECTION, WHICH IS PROVIDED BY THE TRANSPORT LAYER - Non-persistent HTTP: - TCP connection opened - At most one object sent over the TCP connection - TCP connection closed - If we download multiple objects, it requires multiple TCP connections to be established - Takes one RTT to open the TCP connection and another RTT to make the request and receive the response - Persistent HTTP: - TCP connection opened to server - Multiple objects can be transferred serially over the single TCP connection between client and server - Once objects are requested and returned, TCP connection is closed
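- A sketch of one HTTP request sent over a raw TCP socket; the Connection header is where the persistent/non-persistent choice shows up (example.com is a stand-in host):

    # Sketch: one HTTP/1.1 GET over a TCP connection.
    import socket

    request = (
        "GET /index.html HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"   # "close" = non-persistent; "keep-alive" = persistent
        "\r\n"
    )
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(("example.com", 80))      # HTTP server listens on port 80
        s.sendall(request.encode("ascii"))
        response = s.recv(4096)             # first bytes of the HTTP response
    print(response.split(b"\r\n")[0])       # e.g. b'HTTP/1.1 200 OK'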
Non-persistent HTTP example - Assume that the user enters a URL and asks for a webpage that contains text and 10 JPEG images - At 1a: client initiates TCP connection with HTTP server at port 80 - 1b: HTTP server at the host, which has been waiting for a TCP connection at port 80, accepts the TCP connection and notifies the client - Notice: at 1a and 1b, no HTTP requests have flowed yet - At step 2: HTTP client sends HTTP request message into the TCP connection that has been established - HTTP message indicates that the client wants to receive the base HTML file - At step 3: server receives HTTP request message, forms a response message that contains the requested object, and sends the message back to the HTTP client - At step 4: HTTP server closes the TCP connection - At step 5: HTTP client receives the response message that contains the HTML file, displays the HTML file, parses it, and finds the JPEG objects - Step 6: repeats 1-5 for each of the 10 JPEG objects Non-persistent HTTP: response time - Response time: amount of time from when a user first enters a URL into a browser until the base HTML is received and displayed - RTT (round-trip time): amount of time for a very small packet to travel from client to server and back to the client - In non-persistent HTTP, one RTT is needed to initiate the TCP connection and another for the HTTP request to be transmitted and received and for the first bytes of the HTTP response to be returned. Finally, there is also the amount of time needed for the server to transmit the file into its Internet connection - Overall, non-persistent HTTP response time = 2*RTT + file transmission time Persistent HTTP - The issue non-persistent HTTP has is that it requires 2 RTTs per object - OS overhead for each TCP connection - Even if we can retrieve multiple objects in parallel, we want to get information as fast as possible - To cut this latency to 1 RTT, we use the technique called persistent connections - Persistent HTTP: - Server leaves the connection open after sending the response - Subsequent HTTP messages between the same client and server are sent over the open connection without having to wait for the RTT to establish a new TCP connection - When a client has a new request to send, it sends it as soon as it encounters a referenced object - Persistent HTTP cuts response time in half, to one RTT HTTP Messages - Two types of HTTP messages: request and response messages - Request message: starts with a request line that begins with a method name - Usually in ASCII, human-readable format - Single request line is followed by header lines that provide additional information (host, type of browser, types of objects accepted, whether the connection should be kept alive) HTTP Request Message Format Other HTTP Request Messages - POST method: - Uploads completed form data - User input sent from client to server in the entity body of the HTTP POST request message - GET method: - Includes user data in the URL field of the HTTP GET request message - HEAD method: requests the headers (only) that would be returned if the specified URL were requested with an HTTP GET method - Without response body - Useful for determining the size of an object that would be retrieved, without actually retrieving the object - PUT method: - Uploads a new object to a server at a given URL, or replaces an existing object - Completely replaces the file that exists at the specified URL with the content in the entity body of the PUT HTTP request message - Upload new file (object) to server
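- For reference against the "HTTP Request Message Format" slide, a typical request message looks like this on the wire (host and header values are illustrative):

    GET /somedir/page.html HTTP/1.1
    Host: www.example.edu
    User-Agent: Mozilla/5.0
    Accept-Language: en-us
    Connection: keep-alive

    (a blank CRLF line ends the header; a GET request carries no entity body)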
HTTP Response Message - Begins with a status line - Version # of the HTTP protocol being used, followed by the status code and a short message - Following it are the response header lines that provide additional information - Date and time the response was sent, type of server, last-modified field (time the doc was last modified), how long the document is; content type is the type of document being returned - Body is the object being returned HTTP Response Status Codes - 404 Not Found: requested doc not found on server Maintaining user/server state: cookies - Web sites use cookies to maintain information about a user (user's browser) in between transactions - Components: - Server at some point sends a cookie to a client (just a number) - Contained within the cookie header line of an HTTP response message sent to the client - When a client makes a request to that server, it sends the cookie value to the server in a cookie header line - Server remembers all the requests it receives and responses sent associated with that cookie value - Will have a history of interactions with that user Example - Client on left makes HTTP requests to an Amazon server that has a backend database that stores cookies - Client has other cookies from other websites it visited (like a cookie from eBay, for example) - Client makes request to Amazon server without a cookie line - When the Amazon server gets the HTTP request, it creates a cookie, stores the cookie and transaction info in the database, and sends an HTTP response to the client that includes the cookie value - Client includes its cookie value in its second request to Amazon, allowing the Amazon server to take cookie-specific action - Taking the first HTTP request into account - With cookies, the second reply could, for example, offer the client deals based on its history - If the client comes back a week later and provides its cookie, Amazon takes cookie-specific action HTTP cookies: comments - Cookies can be used to store user state over multiple transactions - Also can be used for authentication, remembering shopping cart contents, recommendations based on past behavior - Cookies can allow websites to learn more about you - Challenge: how do we keep state? - At protocol endpoints: maintain state at sender/receiver over multiple transactions - In messages: cookies in HTTP messages carry state
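- Sketch of the two header lines involved (cookie value and paths are illustrative):

    Server -> client, first response:
        HTTP/1.1 200 OK
        Set-Cookie: 1678          (server-chosen cookie value)

    Client -> server, later requests:
        GET /shop HTTP/1.1
        Host: www.example.com
        Cookie: 1678              (lets the server take cookie-specific action)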
Web Caches - Improve user performance and decrease load on the origin server and institutional access links - Widely deployed around the web - How it works: - Institution installs a web cache - Users configure browsers to point to the local Web cache - Whenever a browser wants to make a request to an origin server, it sends its HTTP request to the cache - If the requested object is found, the cache returns the object to the client (origin server not involved) - Else, the cache requests the object from the origin server, receives the object, caches it, and returns it to the client Web caches (proxy servers) - Web cache acts as both a client and a server - Server: with respect to the original requesting client - Client: with respect to the origin server - Origin server can tell the cache about the object's allowable caching behavior - Contained in a response header in an HTTP response message coming back from the origin server - Cache-control header can say the maximum amount of time an object should be cached, or that it should not be cached at all - Web caching is important because - It reduces response time for the client request because the cache is closer to the client - Reduces traffic on an institution's access link because content is downloaded from the origin server less frequently Caching Example - We have an institutional network & a public Internet where origin servers live - We have an access link between the institutional network & public Internet that runs at 1.54 Mbps; it is the bottleneck link - RTT from institutional router to server is 2 sec - Web object size is 100k bits - Average request rate from browsers to origin servers is 15 requests/sec at 100k bits per object - On average, data flows from the public Internet into the institutional network as a result of HTTP GETs at 1.50 Mbps - That average incoming data rate is close to the access link rate of 1.54 Mbps - Performance: - Access link utilization is 0.97. THIS IS REALLY HIGH! - If we look at utilization of the links from the institutional router out to clients that are connected by Gbps Ethernet, their utilization is 0.0015 - End-to-end delay: Internet delay (out to the Internet, delay to origin servers) + access link delay (queueing delay associated with the access link coming into the institutional network) + transmission and queueing delay (associated with transmission within the institution's local area network) - 2 sec + queueing delay on the order of minutes + microseconds (because LAN utilization is low) Improving User Performance - Option: buying a faster access link - 1.54 Mbps to 154 Mbps - Decreases access utilization to 0.0097 - Instead of long queueing delays, we get short queueing delays for packets coming into the network - Problem: faster access link is expensive - Another option: install a web cache - We get lower page load times, decreased load on the access link, decreased load on the origin server - Harder to quantify the benefits Calculating Access Link Utilization and end-to-end delay with a cache: IN NOTEBOOK, REFER TO IT
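- The utilization arithmetic from the example, as a sketch:

    # Sketch: access link utilization for the caching example's numbers.
    object_size = 100e3     # bits per object
    request_rate = 15       # requests per second
    access_link = 1.54e6    # access link capacity, bps

    arrival_rate = object_size * request_rate   # 1.50 Mbps of incoming data
    print(arrival_rate / access_link)           # ~0.97 -> long queueing delays
    print(arrival_rate / 154e6)                 # ~0.0097 with the 154 Mbps upgrade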
Conditional GET - Caching also happens on the client's own host; the browser keeps its own copy of content - If the client has its own up-to-date copy of a piece of content - There is no reason for the web server to send it again - No transmission delay + no network resources consumed - Question: how does the client know that the copy it has is up to date? - When making an HTTP request to a server, the client includes an If-Modified-Since header field - Indicates the date at which the object was last retrieved from the web server - Web server responds to the request in one of two ways - If the copy is current: web server responds with a 304 Not Modified message and does not send the object to the client - If modified and the server has a more up-to-date copy, the server replies with the usual 200 OK message and includes the more recent version of the object - For both types of caches, user-perceived performance is better and fewer network resources are used - Web caching and conditional GET are aimed at improving user-perceived performance; page load latency HTTP/2 - Goal: decrease delay associated with transmitting multi-object HTTP requests - Methods, status codes, and most of the header fields are unchanged from HTTP 1.1 - HTTP/2: increases flexibility at the server in sending objects to the client - Difference: transmission order of requested objects can be based on some kind of client-specified object priority; not necessarily first-come-first-served - Also: servers can push unrequested but anticipated (future-requested) objects to the client in advance - Large objects divided into frames, and frames can be scheduled to mitigate head-of-line blocking HTTP/2: mitigating HOL blocking - In HTTP 1.1, objects are transmitted in a first-come, first-served manner by the server - First object takes really long, and the rest of the objects have to wait - In HTTP/2, large objects are divided into frames, and frame transmissions from one object can be interleaved with transmissions of frames of other objects - In this example, objects 2, 4, 3 are delivered quickly while object 1 is slightly delayed - Gives better overall performance and lower average object delay HTTP/2 to HTTP/3 - Some improvements can still be made to HTTP/2; they have to do with the effects of packet loss and the lack of security on TCP connections - Addressed in HTTP/3 - Adds security, per-object error- and congestion-control (more pipelining) over UDP - HTTP/2 over a single TCP connection means that: - Recovery from packet loss still stalls all object transmissions - As in HTTP 1.1, browsers have an incentive to open multiple parallel TCP connections to reduce stalling, which increases overall throughput - No security over a vanilla TCP connection
2.3 - Email - Three major components: - User agents (mail client that we use as a user) - Mail servers (same idea as HTTP servers) - SMTP (simple mail transfer protocol; used for moving messages around, to and from servers) - User agent: - Also known as mail reader/mail client - Used to compose, edit, and read emails - Messages that are read or created by the client are stored on the server - Mail server: - Two sets of messages for each user - Mailbox: contains incoming messages for a user - Message queue: queue of messages that are waiting to be sent to the destination SMTP server - SMTP protocol - Pushes messages from a user agent or from a mail server to another mail server - Client-server paradigm - Client: sender of email (user agent or sending mail server) - Server: receiving mail server Example: Alice sends e-mail to Bob - Alice and Bob have their own email clients (user agents) and own email servers (where they have accounts) 1) Alice uses her mail client (user agent) to compose an email message and sends it 2) When it's sent, Alice's email client contacts Alice's email server and pushes (transfers) the message that Alice has written to the mail server using the SMTP protocol 3) Message sitting in Alice's server → Alice's server contacts Bob's server. a) Alice's server opens a TCP connection to connect with Bob's server 4) Alice's SMTP server acts as a client and sends Alice's message over the TCP connection to Bob's SMTP email server 5) Bob's server places the message in Bob's mailbox 6) At some point, Bob invokes his user agent and reads Alice's message SMTP RFC - SMTP operates on top of TCP, using TCP to reliably transfer email messages from client to server - Port 25 is the port number used for SMTP - Mail messages are directly transferred from the sending server (acts as client) to the receiving server (destination server) - Three phases to message transfer after TCP set-up 1) SMTP handshaking that exchanges 3 messages a) 220 message initiated by server to client over the TCP connection b) Client says hello c) Server says hello back 2) SMTP transfer of messages 3) SMTP connection closed, then TCP connection closed - Command/response interaction is like HTTP - Commands are ASCII text; responses contain a status code and phrase Sample SMTP interaction - Interaction between Alice's email server acting as a client at crepes.fr and Bob's SMTP server at hamburger.edu - What happens first after establishing a TCP connection: Bob's server sends a 220 message with its host name - Alice's SMTP server responds with hello and its host name - Bob's server responds back with the phrase hello to Alice's host name and the phrase "pleased to meet you": 250 Hello crepes.fr, pleased to meet you - In the first three lines, the servers performed a handshake with each other - Lines 4-13: the SMTP protocol identifies who the message is from, the recipient, the phrase DATA that tells the server that the message itself is about to begin, then the message itself, terminated by a line that contains only a period - Afterward, the client is done and quits, and the server responds back with a 221 message - Email message transferred and the SMTP connection is closed SMTP: Observations - Compared to HTTP: HTTP is a pull protocol - HTTP client pulls data from the HTTP server - SMTP is a push protocol - It "pushes" a message from client to server - Client can mean a user agent (mail client) or a mail server pushing a message to another mail server - Both are human readable; ASCII command/response interactions where you can see what they do, with status codes - Not the same as HTTP, but they do something similar - SMTP: multiple objects can be encoded into one message - HTTP: one object in each response message - SMTP uses persistent connections - Multiple email messages can be transferred over a single SMTP connection - SMTP requires messages to be in 7-bit ASCII, human-readable form, with the message terminated by a line containing only a period - SMTP server uses the CRLF.CRLF sequence (the period on its own line) to determine the end of the message
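- The crepes.fr/hamburger.edu exchange written out (C = client lines, S = server replies; reply phrasings and the message body are illustrative):

    S: 220 hamburger.edu
    C: HELO crepes.fr
    S: 250 Hello crepes.fr, pleased to meet you
    C: MAIL FROM: <alice@crepes.fr>
    S: 250 alice@crepes.fr ... Sender ok
    C: RCPT TO: <bob@hamburger.edu>
    S: 250 bob@hamburger.edu ... Recipient ok
    C: DATA
    S: 354 Enter mail, end with "." on a line by itself
    C: Do you like ketchup?
    C: .
    S: 250 Message accepted for delivery
    C: QUIT
    S: 221 hamburger.edu closing connection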
Mail Message Format - SMTP: protocol for exchanging email messages, defined in RFC 5321 (like RFC 7231 defines HTTP) - RFC 2822 defines the syntax for e-mail messages themselves, similarly to how HTML defines the syntax for web documents - In a nutshell: - Contains a header portion that has To, From, and Subject lines - These header lines within the email message itself are different from the SMTP MAIL FROM: and RCPT TO: commands - Header followed by a blank line - Body is the message, coded in ASCII characters Retrieving email: mail access protocols - Email sits at the destination server - IMAP: Internet Mail Access Protocol - Retrieves messages stored on the server and provides retrieval, deletion, and folders of stored messages on the server - HTTP can also be used to retrieve an email message from a web server configured to return email messages 2.4 - The Domain Name System - DNS is an application-layer protocol and service - Built on top of, and uses the services of, TCP and UDP DNS: Domain Name System - People have identifiers, like names - Internet hosts and routers have identifiers as well - IP addresses (128.119.40.186) - Names like cs.umass.edu - Role of DNS is to provide translation among host names, services, and IP addresses - Distributed database - Contains records with information about translations among host names, services, and IP addresses - DNS itself is a hierarchy of servers spread around the Internet - Servers communicate with each other to provide the name translation service - DNS is implemented as an application-layer service - Implemented by servers that sit at the network edge rather than routers and switches inside the network - Reflects the Internet design philosophy of keeping the network core simple & putting complexity at the network's edge DNS: Services and structure - DNS services: hostname-to-IP-address translation - Also provides an aliasing function; translates from externally facing names like mail.cs.umass.edu to an internal (canonical) host name more complicated than this - Provides service resolution - Return the IP address of a mail server associated with a domain - Load distribution: there may be a # of IP addresses that are able to perform a requested service. DNS rotates among the possible IP addresses and returns one of those as the primary; load balancing - Question: why do we not centralize DNS? - Centralized approach is a single point of failure; DNS is critical infrastructure - Given the loads on the DNS, a centralized approach creates a tremendous concentration of traffic - Performance is important; placing it at one location means long RTT delays from some places - Hence, it does not scale.
With trillions of queries a day, a single centralized service would not have the computational capability, resilience, or performance that one gets with a decentralized approach Thinking About the DNS - Highly distributed, high-scale, high-performance distributed database - Performance and scale: - Needs to handle trillions of requests (mostly reads) that come in - Performance counts; milliseconds count - Highly decentralized - Hundreds and thousands of organizations are responsible for their own pieces (records) within the distributed database - Reliability and security DNS: distributed, hierarchical database - Root of the tree has root DNS servers - Next layer: DNS servers responsible for the .com, .org, etc. domain names; top-level domains (TLDs) - Authoritative name servers: servers that have the responsibility for resolving names within their domain (all of pbs.org, for example) - If a client wants to resolve an address for www.amazon.com - Approach: client contacts a root DNS server to get the name of the TLD server for the .com domains - Client then contacts the TLD server to get the name of the authoritative server for amazon.com - Client contacts the authoritative server for amazon.com to get the IP address of www.amazon.com DNS: Root name servers - Place to go when a server is not able to resolve a name - Contact of last resort; does not itself provide the translation, but is where we go to start the translation - Almost like the central nervous system of the Internet - Cannot function w/o it - Security is important (DNSSEC provides authentication and message integrity) - ICANN (Internet Corporation for Assigned Names and Numbers) manages the root DNS domain - 13 logical root servers around the world; each of them is actually replicated - We have close to 1k physical servers Top-Level Domain and authoritative servers - Each TLD server is responsible for resolving the addresses that end with .com, .org, etc. - Organizations responsible for managing these TLD domains are known as Internet registries - We go here if we want to register a new domain - Authoritative DNS servers are responsible for resolving names within an organization - They are authoritative because each is the DNS server that has authority over the organization's names Local DNS name servers - Every host on the Internet has an associated local DNS server - This is the name server that a host contacts when it wants to resolve a name - Local DNS name server responds immediately to the requesting host if it has that name-to-address translation pair cached locally - Otherwise, it starts the resolution process DNS name resolution: iterated query - Requesting host is at engineering.nyu.edu and wants to resolve the name gaia.cs.umass.edu Steps: 1) Host at engineering.nyu.edu sends a DNS query message to the local NYU DNS server a) Query message contains the hostname to be translated (gaia.cs.umass.edu) 2) The local NYU DNS server resolves the name. It starts by forwarding the query message to a root DNS server. 3) Root DNS server takes note of the .edu suffix and returns to the local DNS server a list of IP addresses of TLD servers responsible for .edu 4) Local NYU DNS server resends the query message to a TLD server 5) TLD server takes note of the umass.edu suffix and responds with the IP address of the authoritative DNS server for UMass 6) NYU's local DNS server resends the query message again, to dns.cs.umass.edu 7) UMass's authoritative name server responds with the IP address of gaia.cs.umass.edu - 8 DNS messages sent; 4 query and 4 reply - DNS caching can reduce this - Iterated query: the local DNS server at NYU iteratively queries a sequence of servers until gaia.cs.umass.edu is resolved
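- From an application's point of view all of this hides behind a single resolver call; the local DNS server does the iterated querying on the app's behalf (sketch):

    # Sketch: name -> IP address translation via the stub resolver.
    import socket

    ip = socket.gethostbyname("gaia.cs.umass.edu")
    print(ip)  # whatever IP address the name's A record currently holds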
DNS name resolution: recursive query - Rather than responding to a request with "I don't know, ask this other server," each name server takes it upon itself to resolve the query and return a definitive reply Steps: 1) Local DNS server at NYU queries the root server 2) Root server queries the TLD server 3) TLD server queries the UMass authoritative name server 4) UMass authoritative name server replies to the TLD server 5) TLD server replies to the root DNS server 6) Root DNS server replies to the local DNS server 7) Local DNS server replies to the querying host - Puts the burden on servers at the upper levels of the hierarchy, so this is not used in practice; iterated queries are used Caching DNS information - Once a DNS server learns a mapping, it caches that mapping for some amount of time - If a future request for that mapping comes in, it immediately returns the cached reply in response to the query - Improves response time - Takes load off the DNS infrastructure - Cached entries disappear from the cache after some amount of time (time to live, TTL) - Possible that if a DNS record changes, cached entries are out of date - DNS doesn't worry about stale, out-of-date cached entries - They will time out eventually, even if there is a bit of inaccurate information floating around in the meantime - If a named host changes its IP address, that change won't be known Internet-wide until all TTLs expire - In this best-effort name-to-address translation approach, there is no need for a costly and complicated mechanism to locate and purge out-of-date information from caches DNS Records - DNS database records are a four-tuple - (name, value, type, TTL (time to live)) - Different DNS record types - type=A - Address record - Name contains a host name and the value contains an IP address - Used for name-to-address translation - type=NS - Name is a domain name - Value is the hostname of the authoritative name server for that domain - type=CNAME - Used for name aliasing; name is the alias name for some "canonical" (real) name - Example: www.ibm.com is really servereast.backup2.ibm.com - Value is the canonical name - type=MX - Gives the name of a mail server associated with the domain DNS Protocol Messages - Both query and reply messages have the same format - Identification: 16-bit # chosen by the querier - When a response is sent in reply to a query, the response takes its ID value to be the same as that of the query, to indicate that this is a response to that particular query - Flags: indicate whether the message is a query or a reply, whether recursion is desired (for a query), and whether the reply is authoritative (for a reply) - Next four fields indicate the # of questions, answers, and other records in the remainder of the protocol message - For a query: a question asks to resolve a hostname to an IP address - The hostname goes in the question field - A resource record (RR) appears in the reply to such a query; a resource record of type A, for example, would be in the answer field Getting Info into DNS - Register the name at a DNS registrar (e.g., Network Solutions) - Need a set of IP addresses for your servers - Need to give the name and address of your authoritative name server to the registrar - Registrar inserts the name server's name in an NS record and its IP address in an A record into the global DNS database - Addresses of all other servers in the network are provided by the authoritative name server to queriers who know the host names of the servers - Bring up the authoritative name server and populate it w/ resource records for the servers in the network
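- In the four-tuple notation above, the two records the registrar inserts look like this (names and address follow the well-known textbook networkutopia.com example; TTL omitted):

    (networkutopia.com, dns1.networkutopia.com, NS)
    (dns1.networkutopia.com, 212.212.212.1, A)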
DNS Security - Protected against DDoS attacks through firewalls - DNS ensures that records entered into the database are from authorized sources - Authentication services play a role in protecting DNS 2.5 - BitTorrent, Peer-to-Peer Network Applications - P2P applications do not rely on an always-on server listening for connections - Arbitrary end systems communicate with each other directly - Peers request service from other peers and provide service to other peers - Important for sustainability of the peer-to-peer network that the service provided scales at least as well as the service requested - Challenges: - Peers may join or leave the network; service comes and goes - Peers' IP addresses may change File Distribution: client-server vs P2P - In the traditional model, the file originates at the server and needs to be distributed to a # of clients - Limiting factors: bandwidth available for the server to upload the file and for the clients to download the file File distribution time: client-server - Server uploads N copies of the file so that all clients receive the file directly from the server - Time required: N * file size / upload bandwidth of the server - Download rates of clients: - Each client downloads the file once - File size / slowest download bandwidth is the maximum time it takes for any client to download the file from the server - Total time: either the time it takes for the server to upload all the copies or, if longer, the time for the slowest client to download one copy - Assumes clients make requests at the same time - Time to distribute increases linearly w/ the # of clients File Distribution Time: P2P - File starts at one place, on the server - Server uploads at least one copy of the complete file - Each client ends up w/ its own copy of the file - As an aggregate, the clients need N * F bits - In the P2P model: upload bandwidth of all clients also contributes to the rate at which the file can be distributed - The whole file distribution process cannot be completed any faster than the server can upload one copy OR any faster than the slowest client can download one copy - In the typical case, the numerator increases linearly with N, but the denominator also increases as additional peers join the network, because they contribute to uploading the file to additional peers P2P file distribution: BitTorrent - Each file is divided into 256 Kb chunks - Allows peers to share bits of a file quickly, as opposed to having to wait for entire file transfer operations to complete - As the first chunk is uploaded to one peer, that peer can begin sharing it while downloading another - Torrent: set of peers exchanging chunks of a file - Tracker: server that maintains which peers are participating in a torrent - When a new peer wants to join the torrent, the peer has to contact the tracker to obtain the list of peers who are participating - Tracker needs to be in a well-known place, or else peers can't find it - Once they have the list, they can start exchanging chunks - A new peer has no chunks to distribute at the beginning and needs a way to get chunks from the peers - If she has one or more chunks, she can begin uploading those
P2P file distribution: BitTorrent
- Each file is divided into 256 KB chunks
  - Allows peers to share pieces of a file quickly, as opposed to having to wait for entire file transfer operations to complete
  - As the first chunk is uploaded to one peer, that peer begins sharing it while downloading another
- Torrent: the set of peers exchanging chunks of a file
- Tracker: a server that maintains which peers are participating in a torrent
  - When a new peer wants to join the torrent, the peer has to contact the tracker to obtain a list of peers who are participating
  - The tracker needs to be in a well-known place, or else peers can't find it
  - Once a peer has the list, it can start exchanging chunks
- A peer has no chunks to distribute at the beginning and needs a way to get chunks from the other peers
  - Once she has one or more chunks, she can begin uploading those chunks while continuing to download more chunks
- Churn: peers coming and going from the torrent
  - More churn: more challenging to deliver all chunks in a timely manner
- Once a peer has the entire file, it might leave the torrent
  - Selfish if it has not yet uploaded at least as many chunks as it has downloaded
  - Seeding: staying to provide an additional seed (complete copy) of the file from which other peers can collect chunks
- As the process progresses, different peers will have different chunks of the file
  - Important that at any given time, all the chunks are still present somewhere in the torrent
- Periodically, peers ask the neighbors they are connected to for a list of the chunks those neighbors have
  - Rarest first: a peer requests the rarest missing chunk first (if only 1 copy of a particular chunk exists in the torrent, it is requested first), as sketched after this section
  - Prevents the case where the only copy of a chunk is on a peer that leaves the torrent, and so preserves the torrent; otherwise the other peers cannot finish downloading their torrent
- Peers are selective about which peers they send chunks to
  - Prioritize the peers that are currently sending to them at the highest rate
  - Periodically, a peer re-evaluates which of its peers are the fastest
  - Encourages peers to upload rapidly if they want to download chunks rapidly
- Optimistic unchoke: Alice selects a random peer and sends one of the chunks it requests
  - Bootstrapping process: lets peers that have no chunks get started downloading the torrent
  - Bob asks for chunks but has none to trade
  - Alice picks Bob and optimistically unchokes him; sends one of the chunks he still needs
  - Bob recomputes his top four providers, and Alice is now one of them
  - Bob reciprocates and sends Alice a chunk
  - Bob and Alice are now well-connected; Bob becomes one of Alice's top-four providers, and they both benefit by getting chunks of the file faster
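A minimal Python sketch (illustrative, not BitTorrent's actual code; pick_rarest_chunk is a made-up name) of rarest-first selection: count how many neighbors hold each chunk we are missing and request the scarcest one:

    from collections import Counter

    def pick_rarest_chunk(my_chunks, neighbor_chunk_lists):
        """Return the missing chunk held by the fewest neighbors (rarest first)."""
        counts = Counter()
        for chunks in neighbor_chunk_lists:      # periodic chunk lists from neighbors
            counts.update(chunks)
        missing = {c for c in counts if c not in my_chunks}
        if not missing:
            return None                          # nothing new to request
        return min(missing, key=lambda c: counts[c])

    # Example: we hold chunk 0; chunk 3 is held by only one neighbor.
    print(pick_rarest_chunk({0}, [{0, 1, 2}, {1, 2, 3}, {1, 2}]))  # -> 3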
2.6 - Video Streaming and Content Distribution Networks
- Streaming video traffic is a major consumer of Internet bandwidth
  - ~80% of ISP traffic is streaming video
- Challenges:
  - Scale: want to reach millions of users
  - Heterogeneity: some users are mobile, some fixed; some have high-speed broadband connections, some bandwidth-poor connections (how do we deal with this?)
- Solution: a distributed, application-level infrastructure

Multimedia: video
- A sequence of encoded images (frames) taken at 24-30 frames per second
- Images are usually a matrix of pixels
  - Pixels are represented by bits and are encoded to reduce the size of the images → reduces the size of the video by exploiting image redundancy
- Spatial coding exploits redundancy within an image
  - In the example image: rather than storing N repeated purple sky pixel values, we can store the single pixel value (purple) and the number of repeated instances
- We can also code between frames: if the image doesn't change much between frames, or changes just a bit, we send only the changes between frames rather than an entire new frame
- CBR (constant bit rate): the video encoding rate is fixed over time
- VBR (variable bit rate): the encoding rate changes over time as the amount of spatial and temporal correlation changes

Streaming stored video
- First challenge: the amount of available bandwidth between client and server changes over time
  - Congestion in the home network, access network, core network, etc.
  - Need to adapt to this
- Delays between source and destination in the Internet, between client and server, also change over time
  - No circuit with a fixed delay from source → destination, and no guaranteed bandwidth between source and destination
  - A packet-switched network sees variable delays; we need to be able to adapt at the client as well

Steps involved in streaming a stored video:
1) Video recorded
   a) Assume it's constant bit-rate video; more and more video is recorded over time, with the cumulative amount of data going up at a constant rate
   b) Each jump represents a new frame's worth of recorded data
   c) Video is stored, then transmitted by the server
2) Video sent by server
   a) Transmitted at the same rate as it was recorded (it could also be sent faster or slower)
3) Video received and played out at client
   a) After a network delay, video playout begins at the receiver at the same rate it was recorded
- At a given time, the client is playing out an earlier part of the video (e.g., frame 2) while the server is sending frame 10
- Rather than downloading the entire video before playing it out, the client begins playout while the server is still sending (streaming) later frames of the video
- With streaming, the client can begin playout earlier, and if the client does not watch the whole video, we do not waste bandwidth transmitting portions of the video that are never viewed

Streaming stored video: challenges
- Continuous playout constraint: the timing of playout at the client side has to match the timing with which the video was first recorded
  - Each piece of video must have arrived from the server at the client in time to be played out
  - Otherwise, we see the spinning dial
- Source of the challenge: variable delay between the video server and the client (jitter). To mitigate it, we use buffering to absorb changes in delay
- Other challenges:
  - Client interactivity: pause, fast-forward, rewind, jumping through a video
  - Video packets being lost: they are retransmitted if we are streaming over TCP → additional delay

Streaming stored video: playout buffering
- Assume constant bit-rate video transmitted by the server at a constant rate
- Difference this time: the network delay for each video frame is variable
  - With a fixed network delay, we had even staircase steps
  - In this case, they are not even: a longer horizontal step when a frame's network delay is significantly longer than the previous frame's, or a shorter one when it is significantly shorter
- Frames are no longer received with a timing that matches the timing needed for playout, due to variable network delays
- To compensate for jitter in network delay, buffers are used to smooth out the delay
  - Used by the client; the client waits before beginning playout
  - Once the client begins playing, it plays out the video with timing that matches the original timing
- How long should the client wait?
  - If the initial client playout delay is too short and frame delays are highly variable, a frame may not arrive in time for its playout → starvation
  - If it's too long, the user waits longer before playout begins
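A tiny Python sketch (illustrative; playout_ok is a made-up helper) of that trade-off: given per-frame arrival times produced by variable network delays, check whether a chosen initial playout delay lets every frame arrive before its playout deadline:

    def playout_ok(arrival_times, initial_delay, frame_period):
        """Frames are generated at t = 0, p, 2p, ...; arrival_times[i] is when
        frame i reaches the client. Frame i is played out at
        initial_delay + i * frame_period; return True if no frame is late."""
        return all(arr <= initial_delay + i * frame_period
                   for i, arr in enumerate(arrival_times))

    # Frames every 33 ms; variable network delays produce these arrival times.
    arrivals = [0.050, 0.080, 0.150, 0.160, 0.210]   # seconds
    print(playout_ok(arrivals, initial_delay=0.05, frame_period=0.033))  # False: frame 2 arrives late (starvation)
    print(playout_ok(arrivals, initial_delay=0.09, frame_period=0.033))  # True: longer initial wait absorbs the jitter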
Streaming multimedia: DASH
- Buffering is great for absorbing variable delay
- But sometimes the amount of available bandwidth between the client and the server is not enough to support the rate at which video is being transmitted from the server to the client
- Need another solution: DASH (Dynamic Adaptive Streaming over HTTP)

How it works:
Server side:
- The video being streamed is divided into chunks
- Each chunk is encoded at different encoding rates (different quality levels) and stored in separate files
  - Larger files are associated with chunks of video encoded at higher quality; they take longer, and a higher amount of bandwidth, to download
- Different chunks, representing different encodings, are stored at different nodes within a content distribution network
- Manifest file: tells the client where it can pick up a chunk at a particular level of encoding
  - Lists the CDN nodes it can go to
Client side:
- Periodically estimates the server-to-client bandwidth that's available
  - Can the path support more traffic? Can I request chunks at higher fidelity?
- When the client needs a chunk, it consults the manifest and requests the video one chunk at a time
  - Chooses the maximum coding rate estimated to be sustainable given the currently available bandwidth (see the sketch below)
  - Can choose different coding rates at different points in time, depending on the amount of bandwidth available at that time, and can choose which server to request a chunk from
- The intelligence is at the client side
  - The client is given information from the manifest file that lists its options
  - The client monitors performance to determine the encoding rate and the CDN node from which it makes the next request
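A minimal Python sketch (illustrative, not any real player's algorithm; choose_rate and the 80% safety margin are assumptions) of the client-side rate decision: pick the highest advertised encoding rate that fits under the measured bandwidth:

    def choose_rate(available_rates_bps, measured_bw_bps, safety=0.8):
        """Pick the highest encoding rate sustainable at ~80% of the measured
        bandwidth; fall back to the lowest rate if nothing fits."""
        sustainable = [r for r in sorted(available_rates_bps)
                       if r <= safety * measured_bw_bps]
        return sustainable[-1] if sustainable else min(available_rates_bps)

    # Rates a manifest might advertise: 300 kbps ... 4 Mbps.
    rates = [300_000, 750_000, 1_500_000, 4_000_000]
    print(choose_rate(rates, measured_bw_bps=2_000_000))  # -> 1500000

The client would rerun this for every chunk, which is how the coding rate can change over time as the available bandwidth changes.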
Content Distribution Networks (CDNs)
How do we structure an application that streams videos to millions of simultaneous clients, chosen from a catalog of millions of videos?
- Option 1: a single mega-server that has all the videos and handles all the requests
  - Single point of failure
  - Point of congestion
  - Long delays between the video server's location and some points on the planet
  - Does not scale
- Option 2: build a large distributed infrastructure that stores and serves copies of video chunks at different geographically distributed sites
  - An application-layer content distribution network
  - Servers in the network are loaded with content to serve
  - A manifest file or the CDN's DNS server points a client to the content that the client requested
- Approaches to CDN placement:
  - Enter deep: CDN servers are pushed deep into many access networks at the Internet's edge
  - Bring home: a smaller # of larger server clusters are located in POPs (points of presence)

Example of streaming a video through a CDN:
- Copies of Mad Men are distributed around the CDN's nodes
- If we want to watch an episode of Mad Men, the Netflix client app sends a request to Netflix central to watch an episode
- Netflix central returns a manifest file listing video chunks and their locations
- The Netflix client app retrieves video from a nearby CDN server, performing buffering and client playout
- If the path is congested, the Netflix client chooses the next chunk from another nearby server
- Services like Netflix are called over-the-top (OTT) services, since this is an application-level service riding on top of the IP infrastructure

2.7 - Socket Programming
- The API available between application-layer code and transport-layer services

Socket Programming
- The one and only API that sits between the application and transport layers
- If we want to directly access the Internet's transport-layer services to send application-layer messages from one part of a distributed application to another, we need to use sockets
- From the OS point of view: applications are written in user space (outside the OS)
  - The transport layer is inside the OS
- Socket: the door between an application-layer program and the transport layer of the operating system below it
- Two socket types for the two transport-layer services:
  - UDP: unreliable datagram service from one process to another
  - TCP: reliable, congestion-controlled, flow-controlled, byte-stream-oriented data transfer
- Application example (see the sketch below):
  1) Client reads a line of characters (data) from the keyboard and sends the data to the server
  2) Server receives the data and converts the characters to uppercase
  3) Server sends the modified data (uppercase translation) back to the client
  4) Client receives the data from the server and displays it on the screen

Socket Programming with UDP
- UDP has no connection between the client and server
  - No handshaking involved before the client and server can communicate
- When the client sends data to the server, it has to explicitly include the IP address and port # of the server
- When the server receives a datagram, it has to extract the client's IP address and port # to know who it's talking to
- Data can be lost or arrive out of order
- From an application viewpoint, UDP is an unreliable, unordered transfer of datagrams from client to server

Client/Server Socket Interaction: UDP
- Client and server each create a socket to use
  - AF_INET means it's an Internet-type socket
    - The socket uses Internet Protocol version 4
  - SOCK_DGRAM means it's a UDP datagram socket, not a TCP socket
- We do not specify the port # of the client socket when it is created; the OS does this
  - Can use the bind method to specify a specific port #
- On the client side, create the application-layer message
  - When we send the message into the socket, we need to explicitly attach the server's IP address and port # to the message and pass that info to the client socket
  - If we only know the host name, it needs to be translated into an IP address via a call to the local DNS server
  - Some port #s are standardized: well-known port #s
- Datagrams sent by clients are received by the server
  - The application-layer message is read out of the server's socket
  - The server learns the IP address and port # of the sending client
- The server formulates a reply message and sends that message into the UDP socket
  - The same socket the server is reading from and sending into
  - The sender needs to include the IP address and port # for the destination of the datagram
- The message reaches the client; the client reads the message from the server, then closes the socket (remember to go back to the UDP client and server code)
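A sketch in the style of the classic uppercase UDP example the notes refer to ('servername' and port 12000 are illustrative placeholders); the two parts would live in separate files and run as separate processes:

    from socket import *

    # --- UDP server: receives a datagram, uppercases it, replies ---
    serverPort = 12000
    serverSocket = socket(AF_INET, SOCK_DGRAM)   # IPv4, UDP datagram socket
    serverSocket.bind(('', serverPort))          # claim the agreed-upon port
    while True:
        message, clientAddress = serverSocket.recvfrom(2048)   # learn client IP/port
        modified = message.decode().upper()
        serverSocket.sendto(modified.encode(), clientAddress)  # reply via same socket

    # --- UDP client (separate file/process) ---
    clientSocket = socket(AF_INET, SOCK_DGRAM)   # OS picks the client port #
    message = input('Input lowercase sentence: ')
    # No handshake: attach the server's address explicitly to each datagram.
    clientSocket.sendto(message.encode(), ('servername', 12000))
    modified, serverAddress = clientSocket.recvfrom(2048)
    print(modified.decode())
    clientSocket.close()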
Socket Programming with TCP
- Connection-oriented
  - The client must contact the server
  - Client and server must communicate with each other before data begins to flow
- The server needs to be up and running, and must have created a socket that welcomes the client's contact
- On the client side:
  - The client creates a socket, specifying the server's IP address and port number
  - When that socket is connected, the client process reaches out to the server process at the TCP level to establish a connection
- When a client first contacts the server as part of the TCP handshake, the server creates a new socket specifically dedicated to communicating with that specific client
  - First: the welcoming socket (the initial point of contact for ALL clients wanting to communicate with the server on a particular port #)
  - Second: the new socket created by the server for future communication with that specific client, for the duration of that TCP connection
  - The new socket has the same port # as the initial welcoming socket
- The TCP connection serves as a pipe between the client and the server, with the server-side end being the newly created socket, providing reliable, in-order, byte-stream transfer between the client and server processes

Client/server socket interaction: TCP
- Client and server must handshake and establish a TCP connection
  - One end is attached to the client's TCP socket and the other end to the server-side TCP socket
- Server side:
  - Creates a socket and waits for an incoming connection request from a TCP client: the welcoming socket (listening socket)
    - The socket where we wait for initial client contacts; not the socket where application-layer messages flow between client and server
  - The server invokes the accept method on the welcoming socket; a blocking call that causes the server to wait until a client reaches out
- Client:
  - Specifies the server name and the server's port # to which the socket is going to be connected
  - Connecting the socket on the client side causes a TCP connection request message to be sent from client to server
    - The connection request is sent from within the transport layer when the client invokes the connect call on its socket; it is not sent by the application itself
    - This connection request from the client is what the server is waiting for
- When the request is received at the server: a new socket is created at the server and returned to the server-side application, which returns from the wait it had been doing on the accept call
  - connectionSocket is the newly created socket; the socket the server-side application uses to communicate with the client-side application
  - A TCP-level message is sent from within the server's operating system (not by the server-side app itself) to the client, to let the TCP client know that a connection has been established
- The client and server then exchange messages similarly to the UDP case, with key differences:
  - The server-side application uses the newly created socket to communicate with the client
  - The client communicates with the server using the client-side socket it created earlier (remember to go back to the TCP client and server code)
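A TCP version of the same uppercase exchange (again a sketch; 'servername' and port 12000 are placeholders), showing the welcoming socket, the blocking accept, and the per-client connectionSocket described above:

    from socket import *

    # --- TCP server ---
    serverPort = 12000
    serverSocket = socket(AF_INET, SOCK_STREAM)   # welcoming (listening) socket
    serverSocket.bind(('', serverPort))
    serverSocket.listen(1)
    while True:
        connectionSocket, addr = serverSocket.accept()  # blocks until a client connects
        sentence = connectionSocket.recv(1024).decode()
        connectionSocket.send(sentence.upper().encode())
        connectionSocket.close()     # close the per-client socket; the welcoming socket stays open

    # --- TCP client (separate file/process) ---
    clientSocket = socket(AF_INET, SOCK_STREAM)
    clientSocket.connect(('servername', 12000))   # TCP handshake happens here
    clientSocket.send('hello, world'.encode())
    print(clientSocket.recv(1024).decode())
    clientSocket.close()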
Chapter 3 - The Transport Layer

3.1 - Introduction and Transport-Layer Services
- UDP: connectionless, best-effort service between communicating processes
- TCP: reliable, flow-and-congestion-controlled, connection-oriented transport

Transport Services and Protocols
- Provide logical communication between application processes running on different hosts
- Logical communication: from the transport layer's perspective, the two communicating sides (sender and receiver) are logically connected to each other by a direct link
  - In reality, the hosts may be on different sides of the planet, separated by different networks with different routers and links
  - From a logical POV, imagine the sender and receiver are directly connected to each other
  - The channel they communicate over may lose messages, reorder messages, or flip bits in messages
  - Abstract away everything that sits between the processes; look at the properties of the channel that connects them and how the protocols implement their services given that channel
- Household analogy (illustrates the difference between communicating processes and hosts):
  - 2 houses with 12 kids in each house
  - Hosts are the houses, processes are the kids
    - Many processes run in each host
  - Application-level messages exchanged between processes are letters sealed in envelopes; envelopes are passed between the houses
  - When a letter arrives at a house, Ann takes that letter and delivers it to one of the children
    - When a datagram arrives at an Internet host, the host needs to deliver the datagram to the appropriate process (hand the arriving letter to one of the kids)
  - The network-layer protocol is equivalent to the postal service
    - Delivers letters from one house to another
  - What happens inside the house is the job of the transport layer; getting messages from one house to the other house is the job of the postal service (network layer)
- TCP, UDP: the transport protocols available to Internet applications

Transport Layer Actions
Sender side:
- Everything starts when an application-layer process creates a message and drops the message into a socket
  - On the lower side of that socket is the transport layer
- The transport layer takes the application-layer message and determines what needs to go in the header fields of the transport-layer segment
- Creates the segment and passes the segment to the network layer (IP protocol)
  - The IP protocol is responsible for delivering the IP datagram from the sending host to the receiving host
Receiver side:
- The transport layer at the receiving side receives the segment from the network layer
- Checks header field values (makes sure the segment is not corrupted)
- Extracts the application-layer message
- Demultiplexes the message up to the appropriate application-layer socket
(A toy end-to-end sketch of these sender/receiver actions follows after the next section.)

Two Principal Internet Transport Protocols
- TCP: provides reliable, in-order delivery between application-level processes
  - Subject to congestion and flow control
  - To implement reliability, congestion control, and flow control, it sets up a connection with connection state at both the sender and receiver sides
- UDP: User Datagram Protocol
  - Best-effort, no-frills approach
  - Unreliable delivery; messages may be delivered out of order
- Neither protocol offers a service that guarantees the amount of time between a message being sent into a socket and when it pops out the other end
- Neither offers a service that guarantees bandwidth between sender and receiver
  - e.g., streamed video might want a guaranteed # of Mbps of throughput between sender and receiver, but no such guarantee is provided
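Here is a toy Python sketch (purely illustrative; the Segment fields, simple_sum stand-in checksum, and dictionary "sockets" are not any real protocol's layout) of the sender/receiver transport-layer actions just described: wrap a message in a segment header, then validate and demultiplex it on arrival:

    from dataclasses import dataclass

    @dataclass
    class Segment:
        src_port: int
        dst_port: int
        checksum: int
        payload: bytes

    def simple_sum(data: bytes) -> int:
        return sum(data) & 0xFFFF   # toy stand-in for a real checksum

    def sender_side(message: bytes, src_port: int, dst_port: int) -> Segment:
        # Fill in the header fields, then hand the segment to the network layer.
        return Segment(src_port, dst_port, simple_sum(message), message)

    def receiver_side(seg: Segment, sockets: dict) -> None:
        if simple_sum(seg.payload) != seg.checksum:
            return                                    # corrupted: drop it
        sockets[seg.dst_port].append(seg.payload)     # demultiplex by port

    sockets = {53: [], 80: []}        # toy per-socket receive queues
    receiver_side(sender_side(b"GET /", 9157, 80), sockets)
    print(sockets[80])                # [b'GET /']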
3.2 - Multiplexing and Demultiplexing
- Demultiplexing is like an Internet host at which datagrams are arriving
  - The datagrams have payloads that are bound for different applications or different protocols running in that host
  - Processing the payloads and directing them to the appropriate application or protocol running in that host is demultiplexing
- Multiplexing is the inverse
  - e.g., TCP takes messages from multiple applications and funnels them into IP
- Example: an HTTP server in the middle sends HTTP messages to a client
  - The client has many apps running
  - Question: when a message is sent from the server to the client, how, among all these apps, is the information demultiplexed up to the web browser and not to any of the other applications running on the client?
  - We also have to think about demultiplexing at the server: multiple clients send HTTP messages back to the server
    - Messages may need to be demultiplexed up to different processes at the server; each is responsible for communicating back to its client

Multiplexing/Demultiplexing
- P1 and P3 are communicating, and P2 and P4 are communicating
- On the multiplexing side: at the host in the center, P1 and P2 send down through their sockets to the transport layer
  - The transport layer multiplexes the data coming in from P1 and P2, puts that data into segments, and adds info to the transport header that is used for later demultiplexing
- On the demultiplexing side: at the receiver (when datagrams are received at the host in the center), we perform the demultiplexing operation
  - The transport layer uses header information to deliver the contents of received segments to the correct socket

How Demultiplexing Works
- When a host receives an IP datagram, each datagram has a source IP address (the IP address of the sender of the datagram) and a destination IP address (the host where we are doing the processing)
- Each datagram carries one transport-layer segment, which has a header
  - We are interested in the source port # and the destination port #
- The host uses IP addresses and port #s to direct the segment to the appropriate socket

Connectionless Demultiplexing
- Recall that when creating a socket, the application has to specify a host-local port number
- When creating a datagram to send into a socket, we specify where the datagram is destined
  - This is where we specify the destination IP address and destination port # (not the local host port #)
- When the receiving host receives the UDP segment:
  - Checks the destination port #
  - Directs the UDP segment to the socket with that port #
- We can have multiple clients sending datagrams to the same UDP port # at a destination
  - If this happens, UDP datagrams with the same destination port # (even if they come from different source IP addresses and different source port #s) are directed to the same socket at the receiving host, because UDP demultiplexing happens only on the basis of the destination port #

Connectionless demultiplexing example
- P1 and P3 communicate with each other, and P1 and P4 communicate with each other
- Consider the datagrams exchanged from P3 to P1 and from P1 back to P3:
  - The source port # is the port # associated with the socket used by the sender (source)
  - The datagram is destined to port 6428, the datagram socket of P1 in the middle
  - When P1 replies back to P3: the destination port # is taken from the source port # of the arriving datagram to which this datagram is a reply
    - The reply datagram has destination port 9157 and source port 6428 (the port # associated with P1's datagram socket)

Connection-oriented demultiplexing
- We have a sending and a receiving side
- To identify sender and receiver, we use the IP addresses and port #s of both sides
- A TCP socket is identified by a 4-tuple:
  - (source IP address, source port #, destination IP address, destination port #)
- When we do demultiplexing, the receiver uses all four values in the 4-tuple to direct the segment to the appropriate socket
- A server can have many simultaneous TCP sockets
  - Each socket is identified by its own 4-tuple
  - Each socket is associated with a different connecting client process

Connection-oriented demultiplexing example
- An Apache HTTP server exchanges HTTP messages over TCP with the host with IP address A and the host with IP address C
  - The address of the HTTP server's host is B
- In the datagram flowing from left to right (from P3 to P4):
  - The source IP address is A (the sender of the datagram) and the source port # is 9157 (the local port # associated with the socket P3 created)
  - In the destination fields, the destination IP address is B (the Apache server) with port # 80 (the port # associated with HTTP service)
- In the reply from P4 back to P3:
  - The source IP address is B with port # 80
  - The destination is IP address A with destination port # 9157
- Note: the destination port # in all three arriving datagrams is 80
  - This is the critical point: since TCP is connection-oriented, we demultiplex on the full 4-tuple
  - Each 4-tuple is unique, so the segments are demultiplexed to P4, P5, and P6 (to different sockets)
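A small Python sketch (illustrative; port 5775 and the dictionary tables are assumptions, while 9157 and 80 come from the example above) contrasting the two demultiplexing rules: UDP looks up the socket by destination port alone, TCP keys on the full 4-tuple:

    # Toy demultiplexing tables.
    udp_sockets = {6428: "P1's UDP socket"}        # keyed by dest port only
    tcp_sockets = {                                # keyed by the full 4-tuple
        ("A", 9157, "B", 80): "socket for P4",
        ("C", 5775, "B", 80): "socket for P5",
        ("C", 9157, "B", 80): "socket for P6",
    }

    def demux_udp(dst_port):
        return udp_sockets.get(dst_port)

    def demux_tcp(src_ip, src_port, dst_ip, dst_port):
        return tcp_sockets.get((src_ip, src_port, dst_ip, dst_port))

    # Same destination port 80, but three distinct 4-tuples -> three sockets.
    print(demux_tcp("A", 9157, "B", 80))   # socket for P4
    print(demux_tcp("C", 9157, "B", 80))   # socket for P6
    print(demux_udp(6428))                 # P1's UDP socket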
3.3 - Connectionless Transport: UDP
- A no-frills, bare-bones Internet transport protocol
- Simple because it provides "best effort" service
  - Sends segments and hopes they get to the other side; they can get lost or be delivered out of order
- No need for handshaking between the UDP sender and receiver
- No need for sender and receiver to share state → connectionless
- Each UDP segment is handled independently of all other arriving segments
- UDP is used because:
  - There is no connection-establishment delay (the time between when a UDP sender wants to talk to a UDP receiver and when data flows)
    - A UDP sender sends datagrams without waiting for connection establishment
  - No connection state is shared between sender and receiver
  - UDP headers are simple, so there's little overhead
  - UDP does not provide congestion control: it can send as many datagrams as it wants, as fast as it wants
    - If the network is congested, it can still function

UDP: User Datagram Protocol
- UDP is useful for a set of applications:
  - Streaming multimedia apps are tolerant of some segment loss but are rate sensitive; they cannot be congestion-controlled too strongly
  - DNS and SNMP: they must operate even when the network is in a compromised or congested state
- If we need reliable transfer, it's possible to build it over UDP at the application layer (as HTTP/3 does)
  - Adds the needed reliability at the application layer
  - Adds congestion control at the application layer

UDP: Transport Layer Actions
- Everything begins when an application passes an application-layer message down to UDP
- UDP forms a UDP segment by filling in a set of header field values and including the message from above (in SNMP's case, as the payload of the UDP segment)
- Creates the UDP segment and passes it down to IP
  - IP forwards the IP datagram on to the receiving IP host
- On the receiving side: the UDP receiver receives the segment from the network layer below
  - Performs a check of the UDP checksum
  - Extracts the application-layer message
  - Demultiplexes the message to the appropriate application-layer socket

UDP segment header
- The header has four fields: a source port # field and a dest port # field, used for multiplexing and demultiplexing
- Length field: used because the payload part of a UDP segment can be of variable length; UDP needs to know exactly how long the UDP segment is
- Checksum field

UDP Checksum
- Goal: detect errors (flipped bits) in the transmitted segment between sender and receiver
- Intuition: if we send two numbers, we additionally send the sum of the two numbers
  - If we receive 3 numbers, any of those 3 could have been changed in transit
  - We take the first and second numbers received, recompute the checksum, and see if it matches the checksum that was sent
  - If they are different, there is a problem
- The UDP sender and receiver operate the same way:
  - Sender: treats the contents of the UDP segment (including the UDP header fields and the source and destination IP addresses of the datagram) as a sequence of 16-bit integers, adds them together, and takes the 1's complement of the sum
    - Computes the checksum, puts the value in the UDP checksum field, then drops the segment down to IP
  - Receiver: computes the checksum of the received segment (including the IP addresses and header) and checks whether the computed checksum equals the checksum field value placed by the sender
    - Not equal: an error is detected
    - Equal: no error is detected — but errors may nonetheless be present

Internet Checksum: An Example
- Add the 16-bit numbers; if there is a carry out of the top bit, we wrap it around and add it back to the 16-bit sum to get the final sum
- Take the 1's complement of that sum to get the checksum
- What kind of protection does the Internet checksum actually provide against flipped bits?
  - Imagine the first number and the second number are swapped during transmission
  - The two mis-ordered numbers compute the same checksum as in the previous example (addition is commutative)
  - Errors go undetected!
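A short Python sketch (illustrative; the two example words are arbitrary) of the 16-bit 1's-complement Internet checksum, including the wraparound carry just described:

    def internet_checksum(words):
        """1's-complement sum of 16-bit words, with carries wrapped around,
        then complemented. `words` is a list of integers in [0, 0xFFFF]."""
        total = 0
        for w in words:
            total += w
            total = (total & 0xFFFF) + (total >> 16)   # wrap the carry back in
        return ~total & 0xFFFF

    words = [0x4500, 0x0073]
    cksum = internet_checksum(words)

    # Receiver check: summing the words plus the checksum should give all 1s.
    rx = 0
    for w in words + [cksum]:
        rx += w
        rx = (rx & 0xFFFF) + (rx >> 16)
    print(hex(rx))   # 0xffff -> no error detected

    # Weakness from above: swapping two words leaves the checksum unchanged.
    print(internet_checksum([0x0073, 0x4500]) == cksum)   # True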
3.4 - Principles of Reliable Data Transfer
- A sending process wants to send data to the receiving process through what appears to be a reliable channel that is unidirectional (sender to receiver)
- The implementation is in the form of a transport-layer protocol:
  - A sender side of the reliable data transfer protocol
  - A receiver side of the reliable data transfer protocol
- However, the underlying channel is unreliable
- Message exchanges between the protocol entities are bidirectional
  - The sender side of the transport protocol sends things to the receiver side, and the receiver side replies back and sends things to the sender side of the reliable transfer protocol
- The complexity of the sender and receiver sides depends on the characteristics of the unreliable channel (can it reorder data, lose data, corrupt data?)
- It's easy for us to look at both sender and receiver and see what's happening, but think of it from the perspective of the sender:
  - How does the sender know whether or not its message, transmitted over that unreliable channel, got through?
  - Only when the receiver signals back that it received the message
- One side does not know what's going on at the other side or what's going on in the channel; it's like a curtain
  - Each side only knows what's going on through the sending and receiving of messages

Reliable Data Transfer Protocol (rdt): interfaces
- On the sending side:
  - Data is passed down from the application-layer process to the transport layer
  - The transport layer adds a header to the data to create a transport-layer segment
  - The segment is sent over the unreliable channel to the receiver
- On the receiving side:
  - The segment has a header and a data component
  - The receiving side delivers the data up to the receiving process at the application layer in such a way that every piece of data sent down on the sending side is delivered exactly as it was sent
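A skeletal Python sketch of the rdt interfaces just described (the rdt_send/udt_send/rdt_rcv/deliver_data names follow the textbook's convention; the bodies and toy helpers here are placeholders, not a real protocol):

    def rdt_send(data):
        """Sender side: called from above (the application). Build a packet
        and pass it to the unreliable channel."""
        packet = make_pkt(data)          # add header fields to the data
        udt_send(packet)                 # hand off to the unreliable channel

    def udt_send(packet):
        """Unreliable channel: in reality it may lose, reorder, or corrupt."""
        channel.append(packet)           # toy stand-in for the network

    def rdt_rcv(packet):
        """Receiver side: called when a packet arrives from the channel."""
        data = extract(packet)           # strip the header
        deliver_data(data)               # pass data up to the application

    # Toy helpers so the sketch runs end to end.
    channel = []
    make_pkt = lambda data: {"header": {}, "data": data}
    extract = lambda packet: packet["data"]
    deliver_data = print

    rdt_send("hello")
    rdt_rcv(channel.pop(0))              # prints: hello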
