HTTP Connections: Two Types
Summary
This document provides an overview of HTTP connections and their two types, non-persistent and persistent HTTP, using examples to illustrate the concepts. It also covers response time, HTTP request and response message formats, status codes, cookies, and Web caching.
Full Transcript
HTTP connections: two types

Non-persistent HTTP
1. TCP connection opened
2. at most one object sent over TCP connection
3. TCP connection closed
• downloading multiple objects requires multiple connections

Persistent HTTP
• TCP connection opened to a server
• multiple objects can be sent over single TCP connection between client and that server
• TCP connection closed

Non-persistent HTTP: example

User enters URL www.someSchool.edu/someDepartment/home.index (containing text, references to 10 jpeg images)
1a. HTTP client initiates TCP connection to HTTP server (process) at www.someSchool.edu on port 80
1b. HTTP server at host www.someSchool.edu, waiting for TCP connection at port 80, "accepts" connection, notifying client
2. HTTP client sends HTTP request message (containing URL) into TCP connection socket. Message indicates that client wants object someDepartment/home.index
3. HTTP server receives request message, forms response message containing requested object, and sends message into its socket
4. HTTP server closes TCP connection
5. HTTP client receives response message containing html file, displays html; parsing html file, it finds 10 referenced jpeg objects
6. Steps 1-5 repeated for each of the 10 jpeg objects

Non-persistent HTTP: response time

RTT (definition): time for a small packet to travel from client to server and back
HTTP response time (per object):
• one RTT to initiate TCP connection
• one RTT for HTTP request and first few bytes of HTTP response to return
• object/file transmission time

Non-persistent HTTP response time = 2 RTT + file transmission time

Persistent HTTP (HTTP 1.1)

Non-persistent HTTP issues:
• requires 2 RTTs per object
• OS overhead for each TCP connection
• browsers often open multiple parallel TCP connections to fetch referenced objects in parallel

Persistent HTTP (HTTP 1.1):
• server leaves connection open after sending response
• subsequent HTTP messages between same client/server sent over open connection
• client sends requests as soon as it encounters a referenced object
• as little as one RTT for all the referenced objects (cutting response time in half)

HTTP request message

two types of HTTP messages: request, response
HTTP request message:
• ASCII (human-readable format)
• request line (GET, POST, HEAD commands), followed by header lines
• carriage return, line feed at start of line indicates end of header lines (\r\n below marks the carriage return and line-feed characters)

GET /index.html HTTP/1.1\r\n
Host: www-net.cs.umass.edu\r\n
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:80.0) Gecko/20100101 Firefox/80.0 \r\n
Accept: text/html,application/xhtml+xml\r\n
Accept-Language: en-us,en;q=0.5\r\n
Accept-Encoding: gzip,deflate\r\n
Connection: keep-alive\r\n
\r\n

* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/

HTTP request message: general format

request line:   method sp URL sp version cr lf
header lines:   header field name: sp value cr lf   (one line per header, repeated)
end of headers: cr lf
entity body:    body
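To make the request format concrete, here is a minimal Python sketch (not part of the original slides) that opens a TCP connection, sends a GET request built exactly as above (request line, header lines, terminating blank \r\n line), and prints the status line of the reply. The host and path are borrowed from the example request; the Connection: keep-alive header asks for a persistent connection, so further objects could be requested over the same socket at roughly one extra RTT each.

```python
# Minimal sketch: send a raw HTTP/1.1 GET request over a TCP socket and read the
# reply. Host/path match the slide's example request; any reachable web server
# would do. Error handling and full response parsing are omitted for brevity.
import socket

HOST = "www-net.cs.umass.edu"    # server from the slide's example request
PORT = 80                        # default HTTP port

# Request line + header lines, each ended by \r\n; a blank \r\n line ends the headers.
request = (
    "GET /index.html HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Accept: text/html,application/xhtml+xml\r\n"
    "Connection: keep-alive\r\n"   # ask the server to keep the TCP connection open
    "\r\n"
)

with socket.create_connection((HOST, PORT)) as sock:   # one RTT: TCP connection setup
    sock.sendall(request.encode("ascii"))              # request goes out on the socket
    reply = b""
    while b"\r\n\r\n" not in reply:                    # read until end of header lines
        chunk = sock.recv(4096)
        if not chunk:
            break
        reply += chunk

    status_line = reply.split(b"\r\n", 1)[0]
    print(status_line.decode())                        # e.g. "HTTP/1.1 200 OK"

    # Because the connection is persistent (HTTP/1.1 keep-alive), a request for a
    # referenced object could now be sent on this same socket, costing about one
    # additional RTT instead of two.
```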
Other HTTP request messages

POST method:
• web page often includes form input
• user input sent from client to server in entity body of HTTP POST request message

GET method (for sending data to server):
• include user data in URL field of HTTP GET request message (following a '?'): www.somesite.com/animalsearch?monkeys&banana

HEAD method:
• requests headers (only) that would be returned if specified URL were requested with an HTTP GET method

PUT method:
• uploads new file (object) to server
• completely replaces file that exists at specified URL with content in entity body of PUT HTTP request message

HTTP response message

status line (protocol, status code, status phrase), followed by header lines, then data (e.g., requested HTML file):

HTTP/1.1 200 OK
Date: Tue, 08 Sep 2020 00:53:20 GMT
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips PHP/7.4.9 mod_perl/2.0.11 Perl/v5.16.3
Last-Modified: Tue, 01 Mar 2016 18:57:50 GMT
ETag: "a5b-52d015789ee9e"
Accept-Ranges: bytes
Content-Length: 2651
Content-Type: text/html; charset=UTF-8
\r\n
data data data data data ...

* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/

HTTP response status codes

status code appears in 1st line of server-to-client response message; some sample codes:
• 200 OK: request succeeded, requested object later in this message
• 301 Moved Permanently: requested object moved, new location specified later in this message (in Location: field)
• 400 Bad Request: request msg not understood by server
• 404 Not Found: requested document not found on this server
• 505 HTTP Version Not Supported

Maintaining user/server state: cookies

Web sites and client browser use cookies to maintain some state between transactions.
four components:
1) cookie header line of HTTP response message
2) cookie header line in next HTTP request message
3) cookie file kept on user's host, managed by user's browser
4) back-end database at Web site

Example: Susan uses browser on laptop, visits specific e-commerce site for first time. When the initial HTTP request arrives at the site, the site creates:
• unique ID (aka "cookie")
• entry in backend database for ID
• subsequent HTTP requests from Susan to this site will contain cookie ID value, allowing site to "identify" Susan

Example exchange (client's cookie file initially holds "ebay 8734"):
• client sends usual HTTP request msg to Amazon; the Amazon server creates ID 1678 for the user and creates an entry in its backend database
• server's usual HTTP response msg carries "set-cookie: 1678"; client's cookie file now holds "ebay 8734, amazon 1678"
• client's later usual HTTP request msgs carry "cookie: 1678"; server accesses the backend database and takes cookie-specific action
• one week later: client's request again carries "cookie: 1678"; server again takes cookie-specific action
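A sketch of this cookie handshake from the client's side, using Python's standard http.client module. The host name and second path are placeholders, and the cookie value would be whatever ID the server chooses (the slides use 1678); a real browser does the equivalent automatically through its cookie file.

```python
# Sketch of the cookie exchange seen from the client. Placeholder host; the
# server decides the actual cookie name/value in its Set-Cookie header.
import http.client

HOST = "www.example.com"   # hypothetical e-commerce site standing in for "amazon"

conn = http.client.HTTPConnection(HOST, 80)

# First visit: no cookie yet, so the request carries no Cookie header.
conn.request("GET", "/")
resp = conn.getresponse()
resp.read()                                   # drain the body so the connection can be reused
set_cookie = resp.getheader("Set-Cookie")     # e.g. "id=1678; Path=/" (site-dependent)
print("server set cookie:", set_cookie)

# The browser's cookie file would now store this value. On every later request to
# the same site, the stored value goes back in a Cookie header, letting the
# back-end database recognize the same user.
if set_cookie:
    cookie_value = set_cookie.split(";", 1)[0]   # keep just "name=value"
    conn.request("GET", "/cart", headers={"Cookie": cookie_value})  # arbitrary later request
    resp = conn.getresponse()
    print("second request status:", resp.status, resp.reason)
    resp.read()

conn.close()
```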
HTTP cookies: comments

What cookies can be used for:
• authorization
• shopping carts
• recommendations
• user session state (Web e-mail)

Challenge: how to keep state?
• at protocol endpoints: maintain state at sender/receiver over multiple transactions
• in messages: cookies in HTTP messages carry state

aside: cookies and privacy
• cookies permit sites to learn a lot about you on their site
• third-party persistent cookies (tracking cookies) allow a common identity (cookie value) to be tracked across multiple web sites

Web caches

Goal: satisfy client requests without involving origin server
• user configures browser to point to a (local) Web cache
• browser sends all HTTP requests to cache
  • if object in cache: cache returns object to client
  • else cache requests object from origin server, caches received object, then returns object to client

Web caches (aka proxy servers)

Web cache acts as both client and server:
• server for original requesting client
• client to origin server
server tells cache about object's allowable caching in response header (the Cache-Control header)

Why Web caching?
• reduce response time for client request (cache is closer to client)
• reduce traffic on an institution's access link
• Internet is dense with caches: enables "poor" content providers to more effectively deliver content

Caching example

Scenario (institutional network with 1 Gbps LAN, connected to the public Internet and origin servers over a 1.54 Mbps access link):
• access link rate: 1.54 Mbps
• RTT from institutional router to server: 2 sec
• web object size: 100K bits
• average request rate from browsers to origin servers: 15/sec
• avg data rate to browsers: 1.50 Mbps

Performance:
• access link utilization = .97; problem: large queueing delays at high utilization!
• LAN utilization: .0015
• end-end delay = Internet delay + access link delay + LAN delay = 2 sec + minutes + usecs

Option 1: buy a faster access link

Scenario: access link rate increased from 1.54 Mbps to 154 Mbps (other numbers unchanged)
Performance:
• access link utilization drops from .97 to .0097
• LAN utilization: .0015
• end-end delay = Internet delay + access link delay + LAN delay = 2 sec + msecs + usecs
Cost: faster access link (expensive!)

Option 2: install a web cache

Scenario: access link rate stays 1.54 Mbps (other numbers unchanged); a local web cache is added to the institutional network
Cost: web cache (cheap!)
Performance:
• access link utilization = ?
• LAN utilization = ?
• average end-end delay = ?
How to compute access link utilization and delay?

Calculating access link utilization, end-end delay with cache

suppose cache hit rate is 0.4:
• 40% of requests served by cache, with low (msec) delay
• 60% of requests satisfied at origin
  • rate to browsers over access link = 0.6 * 1.50 Mbps = .9 Mbps
  • access link utilization = 0.9/1.54 = .58, which means low (msec) queueing delay at access link
• average end-end delay
  = 0.6 * (delay from origin servers) + 0.4 * (delay when satisfied at cache)
  = 0.6 * (2.01 sec) + 0.4 * (~msecs) ≈ 1.2 sec
• lower average end-end delay than with 154 Mbps link (and cheaper too!)
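The arithmetic above can be reproduced in a few lines of Python. The scenario numbers are taken from the slides; the only added assumption is treating the "~msecs" cache-hit delay as about 10 ms.

```python
# Worked version of the slide's cache calculation, using the slide's scenario numbers.
hit_rate = 0.4              # fraction of requests served by the local cache
demand_mbps = 1.50          # average data rate requested by browsers
access_link_mbps = 1.54     # institutional access link rate
origin_delay_s = 2.01       # per-object delay when served by the origin servers (slide value)
cache_delay_s = 0.01        # assumed ~10 ms delay when served by the local cache

miss_rate = 1 - hit_rate
link_utilization = (miss_rate * demand_mbps) / access_link_mbps
avg_delay = miss_rate * origin_delay_s + hit_rate * cache_delay_s

print(f"access link utilization ~ {link_utilization:.2f}")   # ~ 0.58
print(f"average end-end delay   ~ {avg_delay:.2f} s")         # ~ 1.2 s
```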