Full Transcript

1. What Is the Internet?
- Millions of connected devices: host = end system
- Network applications
- Links: copper, optical fiber, electromagnetic waves, satellite. Transmission rate = bandwidth
- Router: forwards packets toward their final destination
- ISP: Internet Service Provider
- A protocol defines the format and the order of messages exchanged between two or more communicating entities. E.g.: TCP, IP, HTTP, Skype, Ethernet
- Internet: a "network of networks" with a hierarchical structure; public Internet and private intranets. Internet standards: RFC (Request for Comments), IETF (Internet Engineering Task Force)
- Communication infrastructure for distributed applications: Web, VoIP, e-mail, games, e-commerce databases, file sharing, streaming, virtual/augmented reality... Smartphone apps: instant messaging, weather, road traffic, cloud music, video streaming...
- Services provided to applications: reliable source-to-destination delivery; connectionless, unreliable "best effort" delivery

A Nuts-and-Bolts Description
- We can describe the nuts and bolts of the Internet, that is, the basic hardware and software components that make it up.
- The Internet is a computer network that interconnects billions of computing devices throughout the world, many of them so-called servers that store and transmit information such as Web pages and e-mail messages. Increasingly, however, nontraditional Internet "things" such as laptops, gaming consoles, and thermostats are also being connected.
- All of these devices are called hosts or end systems. We can also describe the Internet in terms of a networking infrastructure that provides services to distributed applications.
- End systems are connected together by a network of communication links and packet switches.
- Different links can transmit data at different rates, with the transmission rate of a link measured in bits/second.
- When one end system has data to send to another end system, the sending end system segments the data and adds header bytes to each segment. The resulting packages of information are then sent through the network to the destination end system, where they are reassembled into the original data.
- A packet switch takes a packet arriving on one of its incoming communication links and forwards it on one of its outgoing communication links (routers are used in the network core, link-layer switches in access networks).
- The sequence of communication links and packet switches traversed by a packet from the sending end system to the receiving end system is known as a route or path through the network. Analogy: packets are like trucks, communication links like highways and roads, packet switches like intersections, and end systems like buildings. Just as a truck takes a path through the transportation network, a packet takes a path through a computer network.
- End systems access the Internet through Internet Service Providers (ISPs), including residential ISPs such as local cable or telephone companies.
- ISPs provide a variety of types of network access to end systems, including residential broadband access such as cable modem or DSL, high-speed local area network access, and mobile wireless access.
- ISPs also provide Internet access to content providers.
- The Internet is all about connecting end systems to each other, so the ISPs that provide access to end systems must also be interconnected. These lower-tier ISPs are interconnected through national and international upper-tier ISPs such as Level 3 Communications.

A Services Description
- We can also describe the Internet as an infrastructure that provides services to applications.
- The applications are said to be distributed applications, since they involve multiple end systems that exchange data with each other.
- Internet applications run on end systems; they do not run in the packet switches in the network core.
- End systems attached to the Internet provide a socket interface that specifies how a program running on one end system asks the Internet infrastructure to deliver data to a specific destination program running on another end system. A program sending data must follow this socket interface to have the Internet deliver the data to the program that will receive it.

What Is a Protocol?
A protocol defines the format and the order of messages exchanged between two or more communicating entities, as well as the actions taken on the transmission and/or receipt of a message or other event.
- End systems, packet switches, and other pieces of the Internet run protocols that control the sending and receiving of information within the Internet. The Transmission Control Protocol (TCP) and the Internet Protocol (IP) are two of the most important protocols in the Internet. The IP protocol specifies the format of the packets that are sent and received among routers and end systems. The Internet's principal protocols are collectively known as TCP/IP. We'll begin looking into protocols in this introductory chapter.
- It's important that everyone agree on what each and every protocol does, so that people can create systems and products that interoperate.
- Internet standards are developed by the Internet Engineering Task Force (IETF). The IETF standards documents are called requests for comments (RFCs). RFCs started out as general requests for comments (hence the name) to resolve network and protocol design problems that faced the precursor to the Internet.
- There are specific messages we send, and specific actions we take in response to the received reply messages or other events (such as no reply within some given amount of time). It takes two (or more) communicating entities running the same protocol to accomplish a task.

Network Protocols
All activity in the Internet that involves two or more communicating remote entities is governed by a protocol. A protocol defines the format and the order of messages exchanged between two or more communicating entities, as well as the actions taken on the transmission and/or receipt of a message or other event.

The Network Edge
- End systems (hosts): run application programs (e.g. Web, e-mail); located at the edge of the Internet.
- Client/server architecture: the client host requests and receives a service from a server program running on another end system. E.g.: Web browser/server; e-mail client/server. Q: where are the servers located?
- Peer-to-peer architecture: little or no use of dedicated servers. E.g.: Skype, BitTorrent.

Access Networks and Physical Media
Residential (point-to-point) access networks:
- Dial-up modem: up to 56 kbit/s with direct access to the router (often less in practice); you cannot browse and make phone calls at the same time.
- DSL (digital subscriber line): typically installed by a telephone company; up to 1-5 Mbit/s upstream and up to 10-50 Mbit/s downstream; dedicated line.
- Fiber-to-the-home (FTTH).
Enterprise access networks (universities, institutions, companies):
- A Local Area Network (LAN) connects the end systems of companies and universities to the edge router.
- Ethernet: 10 Mbit/s, 100 Mbit/s, 1 Gbit/s, 10 Gbit/s. Modern configuration: end systems connected through an Ethernet switch.
Mobile access networks:
- Wireless access: a shared wireless access network connects end systems to the router through a base station, also called an "access point".
- Wireless LAN: 802.11a/b/g/n/ac (WiFi): 11,
54, 100+ Mbit/s.
- Wide-area wireless access: operated by a telecom provider. WiMax? From a few Mbit/s for 3G cellular systems (UMTS, HSDPA) up to 1 Gbit/s (?!?). Today: 4G / 5G.
- Home networks. Components of a typical home network: DSL or cable modem; router/firewall/NAT; Ethernet; wireless access point.

Transmission media and definitions:
- Bit: the basic unit of information (0/1); it travels from one end system to another, passing through a series of transmitter-receiver pairs.
- Physical medium: what carries the bits between transmitter and receiver.
- Guided media: signals propagate in a solid medium: optical fiber, copper wire, or coaxial cable.
- Unguided media: signals propagate in the atmosphere or in outer space.

Twisted pair (TP)
- Two twisted copper wires; the typical cable used for the Ethernet standard.
- Different kinds of shielding: none (unshielded); per-pair shielding (shielded); a foil or mesh wrapping the whole cable (foiled); both (screened).
- Compact cable naming: X/Y TP, where X is the shielding of the whole cable (U: unshielded; F: foiled, usually an aluminum foil; S: braided metal mesh, usually aluminum-clad copper; SF: both) and Y is the shielding of each pair (U: unshielded; F: shielded).

Transmission media: coaxial cable and optical fiber
Coaxial cable:
- two concentric copper conductors
- bidirectional
- baseband: a single channel on the cable (legacy Ethernet)
- broadband: multiple channels on the cable (HFC)
Optical fiber:
- A thin, flexible medium that conducts pulses of light (each pulse represents a bit).
- High transmission frequency: high point-to-point transmission rates (from 10 to 100 Gbit/s).
- Low error rate, widely spaced repeaters, immune to electromagnetic interference.
- The preferred medium for long-haul links.

Transmission media: radio channels
- Carry signals in the electromagnetic spectrum.
- No physical cable installation required.
- Bidirectional.
- Effects of the propagation environment: reflection; obstruction by obstacles; interference.
Types of radio channel:
- Terrestrial microwave: e.g. channels up to 45 Mbit/s.
- LAN (e.g. WiFi): 11 Mbit/s, 54 Mbit/s, 100+ Mbit/s.
- Wide-area (e.g. cellular): 3G: ~1 Mbit/s; HSDPA: ~14.4 Mbit/s; LTE: 100 Mbit/s; 5G: ?
- Satellite: channels up to 45 Mbit/s (or submultiples); end-to-end delay > 200 ms; geostationary (GEO) at 36,000 km; low-earth orbit (LEO) at low altitude.

host = end system. Hosts are sometimes further divided into two categories: clients and servers.

Access Networks
- The access network is the network that physically connects an end system to the first router on a path from the end system to any other distant end system.

Home Access (DSL, Cable, FTTH, Dial-Up, and Satellite)
- Today, the two most prevalent types of broadband residential access are digital subscriber line (DSL) and cable.
- When DSL is used, a customer's telco is also its ISP. Each customer's DSL modem uses the existing telephone line to exchange data with a digital subscriber line access multiplexer (DSLAM) located in the telco's local central office (CO). The home's DSL modem takes digital data and translates it to high-frequency tones for transmission over telephone wires to the CO; the analog signals from many such houses are translated back into digital format at the DSLAM.
- The residential telephone line carries both data and traditional telephone signals simultaneously, encoded at different frequencies:
  - a high-speed downstream channel, in the 50 kHz to 1 MHz band
  - a medium-speed upstream channel, in the 4 kHz to 50 kHz band
  - an ordinary two-way telephone channel, in the 0 to 4 kHz band
This approach makes the single DSL link appear as if there were three separate links, so that a telephone call and an Internet connection can share the DSL link at the same time.
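The three-band frequency plan just described can be written down directly. A minimal sketch, assuming the band edges quoted in the text (the dictionary keys are descriptive labels, not standard identifiers):

```python
# The DSL link is frequency-divided into three non-overlapping bands
# that share one twisted-pair line (frequencies in Hz, per the text).
DSL_BANDS = {
    "telephone":  (0,       4_000),      # ordinary two-way voice channel
    "upstream":   (4_000,   50_000),     # medium-speed data toward the CO
    "downstream": (50_000,  1_000_000),  # high-speed data from the CO
}

def band_for(freq_hz: float) -> str:
    """Return which logical 'link' a given tone frequency belongs to."""
    for name, (lo, hi) in DSL_BANDS.items():
        if lo <= freq_hz < hi:
            return name
    raise ValueError("frequency outside the DSL plan")

print(band_for(300))      # telephone
print(band_for(25_000))   # upstream
print(band_for(600_000))  # downstream
```

Because the bands do not overlap, a phone call (0-4 kHz) and an Internet connection (the two data bands) can use the same wire at the same time.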
- The DSL standards define multiple transmission rates, including 12 Mbps downstream and 1.8 Mbps upstream, and 55 Mbps downstream and 15 Mbps upstream. Because the downstream and upstream rates are different, the access is said to be asymmetric. The maximum rate is also limited by the distance between the home and the CO, the gauge of the twisted-pair line, and the degree of electrical interference.
- While DSL makes use of the telco's existing local telephone infrastructure, cable Internet access makes use of the cable television company's existing cable television infrastructure (a residence obtains cable Internet access from the same company that provides its cable television).
- One important characteristic of cable Internet access is that it is a shared broadcast medium. In particular, every packet sent by the head end travels downstream on every link to every home, and every packet sent by a home travels on the upstream channel to the head end. For this reason, if several users are simultaneously downloading a video file on the downstream channel, the actual rate at which each user receives its video file will be significantly lower than the aggregate cable downstream rate. Because the upstream channel is also shared, a distributed multiple access protocol is needed to coordinate transmissions and avoid collisions.
- An up-and-coming technology that provides even higher speeds is fiber to the home (FTTH). As the name suggests, the FTTH concept is simple: provide an optical fiber path from the CO directly to the home. The simplest optical distribution network is called direct fiber, with one fiber leaving the CO for each home. More commonly, each fiber leaving the central office is actually shared by many homes; it is not until the fiber gets relatively close to the homes that it is split into individual customer-specific fibers.
- There are two competing optical-distribution network architectures that perform this splitting: active optical networks (AONs) and passive optical networks (PONs). AON is essentially switched Ethernet.
- PON: each home has an optical network terminator (ONT), which is connected by dedicated optical fiber to a neighborhood splitter (the splitter separates upstream and downstream data, and also combines the Internet signal with the telephone signal). The splitter combines a number of homes onto a single, shared optical fiber, which connects to an optical line terminator (OLT) in the telco's CO. The OLT, providing conversion between optical and electrical signals, connects to the Internet via a telco router. In the home, users connect a home router (typically a wireless router) to the ONT and access the Internet via this home router. In the PON architecture, all packets sent from the OLT to the splitter are replicated at the splitter. The average downstream speed of US FTTH customers was approximately 20 Mbps in 2011.
- Two other access network technologies are also used to provide Internet access to the home. In locations where DSL, cable, and FTTH are not available (e.g., in some rural settings), a satellite link can be used to connect a residence to the Internet at speeds of more than 1 Mbps; StarBand and HughesNet are two such satellite access providers. Dial-up access over traditional phone lines is based on the same model as DSL: a home modem connects over a phone line to a modem in the ISP.

Access in the Enterprise (and the Home): Ethernet and WiFi
- A local area network (LAN) is used to connect an end system to the edge router. Ethernet is by far the most prevalent access technology; Ethernet users use twisted-pair copper wire to connect to an Ethernet switch. The Ethernet switch, or a network of such interconnected switches, is then in turn connected into the larger Internet.
- With Ethernet access, users typically have 100 Mbps or 1 Gbps access to the Ethernet switch, whereas servers may have 1 Gbps or even 10 Gbps access.
- In a wireless LAN setting, wireless users transmit/receive packets to/from an access point that is connected into the enterprise's network (most likely using wired Ethernet), which in turn is connected to the wired Internet. A wireless LAN user must typically be within a few tens of meters of the access point. Wireless LAN access is based on IEEE 802.11 technology, more colloquially known as WiFi.
- Even though Ethernet and WiFi access networks were initially deployed in enterprise (corporate, university) settings, they have recently become relatively common components of home networks. Many homes combine broadband residential access (that is, cable modems or DSL) with these inexpensive wireless LAN technologies to create powerful home networks.

Wide-Area Wireless Access: 3G and LTE
- iPhone and Android devices employ the same wireless infrastructure used for cellular telephony to send/receive packets through a base station that is operated by the cellular network provider. Unlike WiFi, a user need only be within a few tens of kilometers of the base station.
- Third-generation (3G) wireless provides packet-switched wide-area wireless Internet access at speeds in excess of 1 Mbps. But even higher-speed wide-area access technologies, a fourth generation (4G) of wide-area wireless networks, are already being deployed. LTE ("Long-Term Evolution") has its roots in 3G technology and can achieve rates in excess of 10 Mbps.

Physical Media
- A bit, when traveling from source to destination, passes through a series of transmitter-receiver pairs. For each transmitter-receiver pair, the bit is sent by propagating electromagnetic waves or optical pulses across a physical medium. The physical medium can take many shapes and forms and does not have to be of the same type for each transmitter-receiver pair along the path.
- Examples of physical media include twisted-pair copper wire, coaxial cable, multimode fiber-optic cable, terrestrial radio spectrum, and satellite radio spectrum. Physical media fall into two categories: guided media and unguided media.
- With guided media, the waves are guided along a solid medium, such as a fiber-optic cable, a twisted-pair copper wire, or a coaxial cable.
- With unguided media, the waves propagate in the atmosphere and in outer space, such as in a wireless LAN or a digital satellite channel.

Twisted-Pair Copper Wire
- Twisted pair, the least expensive and most commonly used guided transmission medium, consists of two insulated copper wires, each about 1 mm thick, arranged in a regular spiral pattern. The wires are twisted together to reduce the electrical interference from similar pairs close by. Typically, a number of pairs are bundled together in a cable by wrapping the pairs in a protective shield. A wire pair constitutes a single communication link.
- Unshielded twisted pair (UTP) is commonly used for computer networks within a building, that is, for LANs. Data rates for LANs using twisted pair today range from 10 Mbps to 10 Gbps. The data rates that can be achieved depend on the thickness of the wire and the distance between transmitter and receiver.

Coaxial Cable
- Coaxial cable consists of two copper conductors, but the two conductors are concentric rather than parallel. With this construction and special insulation and shielding, coaxial cable can achieve high data transmission rates. Coaxial cable is quite common in cable television systems.
- Coaxial cable can be used as a guided shared medium. Specifically, a number of end systems can be connected directly to the cable, with each of the end systems receiving whatever is sent by the other end systems.

Fiber Optics
- An optical fiber is a thin, flexible medium that conducts pulses of light, with each pulse representing a bit.
- A single optical fiber can support tremendous bit rates, up to tens or even hundreds of gigabits per second. Fibers are immune to electromagnetic interference, have very low signal attenuation up to 100 kilometers, and are very hard to tap (no signal leakage). These characteristics have made fiber optics the preferred long-haul guided transmission medium, particularly for overseas links. However, the high cost of optical devices, such as transmitters, receivers, and switches, has hindered their deployment for short-haul transport, such as in a LAN or into the home in a residential access network.
- The Optical Carrier (OC) standard link speeds range from 51.8 Mbps to 39.8 Gbps; these specifications are often referred to as OC-n, where the link speed equals n × 51.8 Mbps.

Terrestrial Radio Channels
- Radio channels carry signals in the electromagnetic spectrum. They are an attractive medium because they require no physical wire to be installed, can penetrate walls, provide connectivity to a mobile user, and can potentially carry a signal for long distances.
- Environmental considerations determine path loss and shadow fading (which decrease the signal strength as the signal travels over a distance and around/through obstructing objects), multipath fading (due to signal reflection off of interfering objects), and interference (due to other transmissions and electromagnetic signals).
- Terrestrial radio channels can be broadly classified into three groups: those that operate over very short distances (e.g., one or two meters); those that operate in local areas, typically spanning from ten to a few hundred meters; and those that operate in the wide area, spanning tens of kilometers.

Satellite Radio Channels
- A communication satellite links two or more Earth-based microwave transmitters/receivers, known as ground stations.
- The satellite receives transmissions on one frequency band, regenerates the signal using a repeater (discussed below), and transmits the signal on another frequency. Two types of satellites are used in communications: geostationary satellites and low-earth orbiting (LEO) satellites.
- Geostationary satellites permanently remain above the same spot on Earth. This stationary presence is achieved by placing the satellite in orbit at 36,000 kilometers above Earth's surface. This huge distance from ground station through satellite back to ground station introduces a substantial signal propagation delay of 280 milliseconds.
- LEO satellites are placed much closer to Earth and do not remain permanently above one spot on Earth. They rotate around Earth (just as the Moon does) and may communicate with each other, as well as with ground stations. To provide continuous coverage to an area, many satellites need to be placed in orbit.

The Network Core
- A mesh of routers that interconnect the end systems.
- The fundamental question: how is data transferred through the network?
- Circuit switching: a dedicated circuit for the entire duration of the session (telephone network).
- Packet switching: the messages of a session use resources on demand and, as a consequence, may have to wait to access a link.

Circuit Switching
- End-to-end resources reserved for the communication: bandwidth, switch capacity.
- Dedicated resources: no sharing; guaranteed performance.
- Connection setup required. Typical example: a phone call. Also used for data traffic.
- Network resources (e.g., bandwidth) are divided into "pieces"; each piece is allocated to the various links. Resources remain idle if unused (no sharing).
- Ways of dividing the bandwidth into pieces: bit-rate partitioning; frequency division; time division.
- FDM (frequency-division multiplexing): each connection gets a fixed frequency band (as in radio); multiple flows are transmitted at the same time.
- TDM (time-division multiplexing): each connection is periodically assigned a time slot at the same frequency (each user gets 1/n of the channel); transmission happens in bursts, but the receiver sees a continuous flow.

Example: how long does it take to send a 640,000-bit file from host A to host B over a circuit-switched network?
- All links have a bit rate of 1.536 Mbit/s
- Each link uses TDM with 24 slots
- It takes 500 ms to establish an end-to-end circuit
T_total = 500 ms (wait for connection setup) + T_transmit
T_transmit = L (640,000 bits) / R (rate per connection)
R = C_total (1.536 Mbit/s) / 24 = 0.064 Mbit/s = 64 kbit/s
T_transmit = 640,000 bits / 64 kbit/s = 10 s
T_total = 500 ms + 10 s = 10.5 s
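The worked example above can be checked with a few lines of arithmetic, using only the numbers given in the text:

```python
# Circuit-switched file transfer from the example: each 1.536 Mbit/s link
# is TDM-shared across 24 slots, and circuit setup takes 500 ms.
link_rate_bps  = 1_536_000   # 1.536 Mbit/s
slots          = 24
file_size_bits = 640_000
setup_time_s   = 0.5

circuit_rate_bps = link_rate_bps / slots              # 64,000 bit/s per circuit
transmit_time_s  = file_size_bits / circuit_rate_bps  # 10 s
total_time_s     = setup_time_s + transmit_time_s

print(circuit_rate_bps)  # 64000.0
print(total_time_s)      # 10.5
```

Note that the circuit's rate is the full link rate divided by the number of TDM slots, since the connection owns exactly one slot per frame.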
Packet Switching
- The end-to-end data flow is divided into packets.
- The packets of users A and B share the network resources.
- Each packet uses the full capacity of the physical resources.
- Resources are allocated best effort and as needed (on demand).
- Contention for resources: the demand for resources can exceed the amount available.
- Store and forward: a node must receive a packet in its entirety before it can begin transmitting on the outbound link.
- Congestion: packets queue up, waiting to use the link.
- (Dynamic) when demand is high the network is slow, otherwise fast; each packet is assigned the full capacity of the channel.
- Best effort.
- Better suited when data traffic is not continuous (it lets traffic streams mix).

Packet switching versus circuit switching
Packet switching:
- Great for bursty data: resource sharing; simpler, no message exchange needed to reserve resources.
- If congestion occurs: packet delay and loss. Protocols are needed for reliable data transfer and to prevent or control congestion.
- Q: how can circuit-like behavior be obtained? Bandwidth guarantees are needed for audio/video applications; this is still an unsolved problem.

Internet structure: the network of networks
- The network core is the mesh of packet switches and links that interconnects the Internet's end systems. (In circuit switching, network nodes reserve resources both for calls in progress and for future ones; bandwidth is divided into pieces by time division, frequency division, or bit-rate partitioning.)

Packet Switching
- To send a message from a source end system to a destination end system, the source breaks long messages into smaller chunks of data known as packets.
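The segmentation just described (split the message, prefix each chunk with header bytes, reassemble at the destination) can be sketched as below. This is a toy illustration only: the 4-byte header layout, with a 2-byte destination address and a 2-byte sequence number, is invented here and does not correspond to any real protocol's format.

```python
# Toy sketch: segment a message into packets with a small (invented) header,
# then reassemble the original data at the destination end system.

def segment(message: bytes, payload_size: int, dest_addr: int) -> list[bytes]:
    """Split `message` into chunks of at most `payload_size` bytes, each
    prefixed with a 4-byte header: destination (2 bytes) + sequence (2 bytes)."""
    packets = []
    for seq, start in enumerate(range(0, len(message), payload_size)):
        chunk = message[start:start + payload_size]
        header = dest_addr.to_bytes(2, "big") + seq.to_bytes(2, "big")
        packets.append(header + chunk)
    return packets

def reassemble(packets: list[bytes]) -> bytes:
    """Strip the 4-byte headers, order by sequence number, and rejoin."""
    ordered = sorted(packets, key=lambda p: int.from_bytes(p[2:4], "big"))
    return b"".join(p[4:] for p in ordered)

msg = b"hello, network core!"
pkts = segment(msg, payload_size=8, dest_addr=42)   # 20 bytes -> 3 packets
print(reassemble(pkts) == msg)                      # True
```

Sorting by sequence number means reassembly works even if packets arrive out of order, which is exactly why each segment carries header bytes in the first place.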
- Between source and destination, each packet travels through communication links and packet switches (of which there are two predominant types: routers and link-layer switches). Packets are transmitted over each communication link at a rate equal to the full transmission rate of the link. If a source end system or a packet switch is sending a packet of L bits over a link with transmission rate R bits/sec, then the time to transmit the packet is L/R seconds.

Store-and-Forward Transmission
- Store-and-forward transmission means that the packet switch must receive the entire packet before it can begin to transmit the first bit of the packet onto the outbound link. E.g., a router: a router will typically have many incident links, since its job is to switch an incoming packet onto an outgoing link. Only after the router has received all of the packet's bits can it begin to transmit (i.e., "forward") the packet onto the outbound link.
- To calculate the amount of time that elapses from when the source begins to send the packet until the destination has received the entire packet: the source begins to transmit at time 0; at time L/R seconds, the source has transmitted the entire packet, and the entire packet has been received and stored at the router. At time L/R seconds, since the router has just received the entire packet, it can begin to transmit the packet onto the outbound link towards the destination; at time 2L/R, the router has transmitted the entire packet, and the entire packet has been received by the destination. Thus, the total delay is 2L/R.
- If the switch instead forwarded bits as soon as they arrive (without first receiving the entire packet), the total delay would be L/R, since bits are not held up at the router.
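The 2L/R result above, and its generalization to more routers, reduces to one line of arithmetic (ignoring propagation and queuing delays, as the text does):

```python
# Store-and-forward delay: each of the (routers + 1) links adds L/R seconds,
# because every hop must buffer the whole packet before retransmitting it.

def store_and_forward_delay(L_bits: float, R_bps: float, routers: int = 1) -> float:
    """End-to-end delay over (routers + 1) links of rate R, ignoring
    propagation and queuing delays."""
    links = routers + 1
    return links * L_bits / R_bps

# One 1,000-bit packet over two 1 Mbps links (one router): 2L/R = 2 ms.
print(store_and_forward_delay(1_000, 1_000_000))             # 0.002
# With two routers (three links), the delay grows to 3L/R = 3 ms.
print(store_and_forward_delay(1_000, 1_000_000, routers=2))  # 0.003
```

With cut-through forwarding instead, the delay over the same path would stay close to L/R, since bits are not held up at each router.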
- Let's now consider the general case of sending one packet from source to destination over a path consisting of N links, each of rate R (thus, there are N-1 routers between source and destination). Applying the same logic as above, we see that the end-to-end delay is: d_end-to-end = N * L/R.

Queuing Delays and Packet Loss
- Each packet switch has multiple links attached to it. For each attached link, the packet switch has an output buffer (also called an output queue), which stores packets that the router is about to send into that link. The output buffers play a key role in packet switching. If an arriving packet needs to be transmitted onto a link but finds the link busy with the transmission of another packet, the arriving packet must wait in the output buffer. Thus, in addition to the store-and-forward delays, packets suffer output buffer queuing delays. These delays are variable and depend on the level of congestion in the network.

Forwarding Tables and Routing Protocols
- In the Internet, every end system has an address called an IP address. When a source end system wants to send a packet to a destination end system, the source includes the destination's IP address in the packet's header.
- Each router has a forwarding table that maps destination addresses (or portions of the destination addresses) to that router's outbound links. When a packet arrives at a router, the router examines the address and searches its forwarding table, using this destination address, to find the appropriate outbound link. The router then directs the packet to this outbound link. Analogy: the end-to-end routing process is like a car driver who does not use maps but instead prefers to ask for directions.
- A router uses a packet's destination address to index a forwarding table and determine the appropriate outbound link. The Internet has a number of special routing protocols that are used to automatically set the forwarding tables.
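A forwarding table lookup can be sketched as below. This is a deliberately simplified toy: real routers match binary address prefixes with longest-prefix match, and the addresses and link names here are invented for illustration.

```python
# Toy forwarding table: maps destination-address prefixes (as dotted strings)
# to outbound links; the lookup picks the longest matching prefix.

FORWARDING_TABLE = {
    "203.0.113": "link-1",   # packets for 203.0.113.* go out link-1
    "198.51":    "link-2",   # packets for 198.51.*.* go out link-2
    "":          "link-0",   # default route (matches everything)
}

def outbound_link(dest_ip: str) -> str:
    """Return the outbound link for the longest prefix matching dest_ip."""
    best = max((p for p in FORWARDING_TABLE if dest_ip.startswith(p)), key=len)
    return FORWARDING_TABLE[best]

print(outbound_link("203.0.113.7"))  # link-1
print(outbound_link("8.8.8.8"))      # link-0 (default route)
```

A routing protocol's job is then to populate tables like this one automatically, for example from shortest-path computations, rather than by hand as done here.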
- A routing protocol may, for example, determine the shortest path from each router to each destination and use the shortest-path results to configure the forwarding tables in the routers.

Circuit Switching
- In circuit-switched networks, the resources needed along a path (buffers, link transmission rate) to provide for communication between the end systems are reserved for the duration of the communication session between the end systems. In packet-switched networks, these resources are not reserved; a session's messages use the resources on demand and, as a consequence, may have to wait (that is, queue) for access to a communication link. (Analogy: a restaurant that takes reservations versus one that does not.)
- Traditional telephone networks are examples of circuit-switched networks. Consider what happens when one person wants to send information (voice or facsimile) to another over a telephone network. Before the sender can send the information, the network must establish a connection between the sender and the receiver. This is a bona fide connection for which the switches on the path between the sender and receiver maintain connection state for that connection. In the jargon of telephony, this connection is called a circuit. When the network establishes the circuit, it also reserves a constant transmission rate in the network's links (representing a fraction of each link's transmission capacity) for the duration of the connection. Since a given transmission rate has been reserved for this sender-to-receiver connection, the sender can transfer the data to the receiver at the guaranteed constant rate.
- In contrast, consider what happens when one host wants to send a packet to another host over a packet-switched network, such as the Internet. As with circuit switching, the packet is transmitted over a series of communication links. But different from circuit switching, the packet is sent into the network without reserving any link resources whatsoever.
- If one of the links is congested because other packets need to be transmitted over the link at the same time, then the packet will have to wait in a buffer at the sending side of the transmission link and suffer a delay.

Multiplexing in Circuit-Switched Networks
- A circuit in a link is implemented with either frequency-division multiplexing (FDM) or time-division multiplexing (TDM).
- With FDM, the frequency spectrum of a link is divided up among the connections established across the link. Specifically, the link dedicates a frequency band to each connection for the duration of the connection. In telephone networks, this frequency band typically has a width of 4 kHz (that is, 4,000 hertz or 4,000 cycles per second). The width of the band is called, not surprisingly, the bandwidth. FM radio stations also use FDM to share the frequency spectrum between 88 MHz and 108 MHz, with each station being allocated a specific frequency band.
- For a TDM link, time is divided into frames of fixed duration, and each frame is divided into a fixed number of time slots. When the network establishes a connection across a link, the network dedicates one time slot in every frame to this connection. These slots are dedicated for the sole use of that connection, with one time slot available for use (in every frame) to transmit the connection's data.
- Proponents of packet switching have always argued that circuit switching is wasteful because the dedicated circuits are idle during silent periods. They also enjoy pointing out that establishing end-to-end circuits and reserving end-to-end transmission capacity is complicated and requires complex signaling software to coordinate the operation of the switches along the end-to-end path.
- Packet Switching Versus Circuit Switching Critics of packet switching have often argued that packet switching is not suitable for real-time services (for example, telephone calls and video conference calls) because of its variable and unpredictable end-to-end delays (due primarily to variable and unpredictable queuing delays). Proponents of packet switching argue that (1) it offers better sharing of transmission capacity than circuit switching and (2) it is simpler, more efficient, and less costly to implement than circuit switching. To see why packet switching is more efficient, suppose that users share a 1 Mbps link and that each user alternates between periods of activity, when the user generates data at a constant rate of 100 kbps, and periods of inactivity, when the user generates no data. Suppose further that a user is active only 10 percent of the time. With circuit switching, 100 kbps must be reserved for each user at all times. For example, with circuit-switched TDM, if a one-second frame is divided into 10 time slots of 100 ms each, then each user would be allocated one time slot per frame. Thus, the circuit-switched link can support only 10 (= 1 Mbps/100 kbps) simultaneous users. With packet switching, the probability that a specific user is active is 0.1 (that is, 10 percent). If there are 35 users, the probability that there are 11 or more simultaneously active users is approximately 0.0004. (examples on p. 57) Circuit switching pre-allocates use of the transmission link regardless of demand, with allocated but unneeded link time going unused. Packet switching on the other hand allocates link use on demand. Link transmission capacity will be shared on a packet-by-packet basis only among those users who have packets that need to be transmitted over the link. - A Network of Networks We saw earlier that end systems (PCs, smartphones, Web servers, mail servers, and so on) connect into the Internet via an access ISP.
The access ISP can provide either wired or wireless connectivity, using an array of access technologies including DSL, cable, FTTH, Wi-Fi, and cellular. (Note that the access ISP does not have to be a telco or a cable company.) But connecting end users and content providers into an access ISP is only a small piece of the puzzle; to complete it, the access ISPs themselves must be interconnected. This is done by creating a network of networks. Over the years, the network of networks that forms the Internet has evolved into a very complex structure. The overarching goal is to interconnect the access ISPs so that all end systems can send packets to each other. One naive approach would be to have each access ISP directly connect with every other access ISP (but this does not scale: it would require each access ISP to have a separate communication link to each of the hundreds of thousands of other access ISPs all over the world). ➔ Network Structure 1 interconnects all of the access ISPs with a single global transit ISP. Our (imaginary) global transit ISP is a network of routers and communication links that not only spans the globe, but also has at least one router near each of the hundreds of thousands of access ISPs. Of course, it would be very costly for the global ISP to build such an extensive network. To be profitable, it would naturally charge each of the access ISPs for connectivity, with the pricing reflecting (but not necessarily directly proportional to) the amount of traffic an access ISP exchanges with the global ISP. Since the access ISP pays the global transit ISP, the access ISP is said to be a customer and the global transit ISP is said to be a provider. ➔ Network Structure 2 consists of the hundreds of thousands of access ISPs and multiple global transit ISPs.
The access ISPs certainly prefer Network Structure 2 over Network Structure 1 since they can now choose among the competing global transit providers as a function of their pricing and services. Note, however, that the global transit ISPs themselves must interconnect: Otherwise access ISPs connected to one of the global transit providers would not be able to communicate with access ISPs connected to the other global transit providers. Network Structure 2, just described, is a two-tier hierarchy with global transit providers residing at the top tier and access ISPs at the bottom tier. ➔ Network Structure 3 In reality, the hierarchy has more levels, with regional and national ISPs between the access ISPs and the tier-1 ISPs. For example, in China, there are access ISPs in each city, which connect to provincial ISPs, which in turn connect to national ISPs, which finally connect to tier-1 ISPs. ➔ Network Structure 4 To build a network that more closely resembles today’s Internet, we must add points of presence (PoPs), multi-homing, peering, and Internet exchange points (IXPs) to the hierarchical Network Structure 3. PoPs exist in all levels of the hierarchy, except for the bottom (access ISP) level. A PoP is simply a group of one or more routers (at the same location) in the provider’s network where customer ISPs can connect into the provider ISP. Any ISP (except for tier-1 ISPs) may choose to multi-home, that is, to connect to two or more provider ISPs. In this structure, the customer ISPs pay their provider ISPs to obtain global Internet interconnectivity. The amount that a customer ISP pays a provider ISP reflects the amount of traffic it exchanges with the provider. To reduce these costs, a pair of nearby ISPs at the same level of the hierarchy can peer, that is, they can directly connect their networks together so that all the traffic between them passes over the direct connection rather than through upstream intermediaries. When two ISPs peer, it is typically settlement-free, that is, neither ISP pays the other. As noted earlier, tier-1 ISPs also peer with one another, settlement-free.
Along these same lines, a third-party company can create an Internet Exchange Point (IXP), which is a meeting point where multiple ISPs can peer together. An IXP is typically in a stand-alone building with its own switches. There are over 400 IXPs in the Internet today. We refer to this ecosystem consisting of access ISPs, regional ISPs, tier-1 ISPs, PoPs, multi-homing, peering, and IXPs as Network Structure 4. ➔ Network Structure 5 builds on top of Network Structure 4 by adding content-provider networks. Google is currently one of the leading examples of such a content-provider network. As of this writing, it is estimated that Google has 50–100 data centers distributed across North America, Europe, Asia, South America, and Australia. Some of these data centers house over one hundred thousand servers, while other data centers are smaller, housing only hundreds of servers. The Google data centers are all interconnected via Google’s private TCP/IP network, which spans the entire globe but is nevertheless separate from the public Internet. Importantly, the Google private network only carries traffic to/from Google servers. The Google private network attempts to “bypass” the upper tiers of the Internet by peering (settlement-free) with lower-tier ISPs, either by directly connecting with them or by connecting with them at IXPs. However, because many access ISPs can still only be reached by transiting through tier-1 networks, the Google network also connects to tier-1 ISPs, and pays those ISPs for the traffic it exchanges with them.
By creating its own network, a content provider not only reduces its payments to upper-tier ISPs, but also has greater control of how its services are ultimately delivered to end users. Delay, Loss, and Throughput in Packet-Switched Networks Nodal delay dnodal = dproc + dqueue + dtrans + dprop - processing (a few microseconds) - queuing (depends on congestion) - transmission (L/R) - time to push the packet onto the link, significant on low-speed links - propagation (distance/propagation speed), on the order of ms - queuing delay: - R transmission rate (bit/s) - L packet length (bits) - a average packet arrival rate traffic intensity = L*a / R ≈ 1 → substantial delay > 1 → work arrives faster than it can be served Delays and Internet routes: - traceroute: route and delays that packets encounter between the sender and a server Packet loss - a queue has finite capacity; once it fills, arriving packets are dropped Throughput - rate (data/unit of time) at which data is transferred between sender and receiver (bit/s) - instantaneous - average (over a longer period of time) Protocol layers - organize the different components of a network Computer networks necessarily constrain throughput (the amount of data per second that can be transferred) between end systems, introduce delays between end systems, and can actually lose packets. 1.4 Overview of Delay in Packet-Switched Networks As a packet travels from one node (host or router) to the subsequent node (host or router) along this path, the packet suffers from several types of delays at each node along the path. The most important of these delays are the nodal processing delay, queuing delay, transmission delay, and propagation delay; together, these delays accumulate to give a total nodal delay.
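The components just listed combine into the total nodal delay; a small numerical sketch (all values below are illustrative assumptions: a 1,500-byte packet, a 10 Mbps link, routers 1,000 km apart, propagation at 2×10^8 m/s):

```python
# Total nodal delay = processing + queuing + transmission + propagation.
# All figures below are illustrative assumptions.

L = 1500 * 8          # packet length: 1,500 bytes, in bits
R = 10e6              # link transmission rate: 10 Mbps
d = 1_000_000         # link length: 1,000 km, in metres
s = 2e8               # propagation speed in the medium (m/s)

d_proc = 2e-6         # processing delay (order of microseconds)
d_queue = 0.0         # queuing delay (assume an empty queue)
d_trans = L / R       # transmission delay: push all L bits onto the link
d_prop = d / s        # propagation delay: a bit travels the link length

d_nodal = d_proc + d_queue + d_trans + d_prop
print(f"d_trans = {d_trans*1e3:.2f} ms, d_prop = {d_prop*1e3:.2f} ms")
print(f"d_nodal = {d_nodal*1e3:.3f} ms")   # 6.202 ms
```

Note how, with these numbers, propagation (5 ms) dominates transmission (1.2 ms); on a slow link the balance reverses.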
- Types of Delay: A packet can be transmitted on a link only if there is no other packet currently being transmitted on the link and if there are no other packets preceding it in the queue; if the link is currently busy or if there are other packets already queued for the link, the newly arriving packet will then join the queue. ❖ Processing Delay The time required to examine the packet’s header and determine where to direct the packet is part of the processing delay. Processing delays in high-speed routers are typically on the order of microseconds or less. ❖ Queuing Delay The packet experiences a queuing delay as it waits to be transmitted onto the link. The length of the queuing delay of a specific packet will depend on the number of earlier-arriving packets that are queued and waiting for transmission onto the link. Queuing delays can be on the order of microseconds to milliseconds in practice. ❖ Transmission Delay Assuming that packets are transmitted in a first-come-first-served manner, denote the length of the packet by L bits, and denote the transmission rate of the link from router A to router B by R bits/sec. The transmission delay is L/R. This is the amount of time required to push (that is, transmit) all of the packet’s bits into the link. Transmission delays are typically on the order of microseconds to milliseconds in practice. ❖ Propagation Delay Once a bit is pushed into the link, it needs to propagate to router B; the time required to propagate from the beginning of the link to router B is the propagation delay. The bit propagates at the propagation speed of the link (the range of which is equal to, or a little less than, the speed of light).
That is, the propagation delay is d/s, where d is the distance between router A and router B and s is the propagation speed of the link. - Comparing Transmission and Propagation Delay The transmission delay is the amount of time required for the router to push out the packet; it is a function of the packet’s length and the transmission rate of the link, but has nothing to do with the distance between the two routers. The propagation delay, on the other hand, is the time it takes a bit to propagate from one router to the next; it is a function of the distance between the two routers, but has nothing to do with the packet’s length or the transmission rate of the link. The total nodal delay is given by dnodal = dproc + dqueue + dtrans + dprop. - Queuing Delay and Packet Loss The most complicated and interesting component of nodal delay is the queuing delay. Unlike the other three delays (namely, dproc, dtrans, and dprop), the queuing delay can vary from packet to packet. Let a denote the average rate at which packets arrive at the queue (a is in units of packets/sec). Recall that R is the transmission rate (in bits/sec) at which bits are pushed out of the queue. Also suppose that all packets consist of L bits. Then the average rate at which bits arrive at the queue is La (bits/sec). The ratio La/R, called the traffic intensity, often plays an important role in estimating the extent of the queuing delay. - If La/R > 1, then the average rate at which bits arrive at the queue exceeds the rate at which the bits can be transmitted from the queue. The queue will tend to increase without bound and the queuing delay will approach infinity! Therefore, one of the golden rules in traffic engineering is: Design your system so that the traffic intensity is no greater than 1. - If La/R ≤ 1:
The nature of the arriving traffic impacts the queuing delay. Typically, the arrival process to a queue is random; that is, the arrivals do not follow any pattern and the packets are spaced apart by random amounts of time. - If the traffic intensity is close to zero, the average queuing delay will be close to zero. - When the traffic intensity is close to 1, there will be intervals of time when the arrival rate exceeds the transmission capacity, and a queue will form during these periods of time; - when the arrival rate is less than the transmission capacity, the length of the queue will shrink. Packet Loss Because a queue has finite capacity, packet delays do not really approach infinity as the traffic intensity approaches 1. Instead, a packet can arrive to find a full queue. With no place to store such a packet, a router will drop that packet; that is, the packet will be lost. The fraction of lost packets increases as the traffic intensity increases. Therefore, performance at a node is often measured not only in terms of delay, but also in terms of the probability of packet loss. - End-to-End Delay (the total delay from source to destination) Suppose there are N − 1 routers between the source host and the destination host, that the network is uncongested (so queuing delays are negligible), that the processing delay at each router and at the source host is dproc, that the transmission rate out of each router and out of the source host is R bits/sec, and that the propagation delay on each link is dprop. The nodal delays accumulate to give an end-to-end delay dend−end = N(dproc + dtrans + dprop), where dtrans = L/R. - Traceroute To get a hands-on feel for end-to-end delay in a computer network, we can make use of the Traceroute program. Traceroute is a simple program that can run in any Internet host. When the user specifies a destination hostname, the program in the source host sends multiple, special packets toward that destination. In an example trace, there are nine routers between the source and the destination. Most of these routers have a name, and all of them have addresses.
For example, the name of Router 3 is border4-rt-gi1-3.gw.umass.edu and its address is 128.119.2.194. Looking at the data provided for this same router, we see that in the first of the three trials the round-trip delay between the source and the router was 1.03 msec. The round-trip delays for the subsequent two trials were 0.48 and 0.45 msec. These round-trip delays include all of the delays just discussed, including transmission delays, propagation delays, router processing delays, and queuing delays. Because the queuing delay varies with time, the round-trip delay of packet n sent to router n can sometimes be longer than the round-trip delay of packet n+1 sent to router n+1. Indeed, we observe this phenomenon in the example: the delays to Router 6 are larger than the delays to Router 7! - End System, Application, and Other Delays An end system wanting to transmit a packet into a shared medium (e.g., as in a WiFi or cable modem scenario) may purposefully delay its transmission as part of its protocol for sharing the medium with other end systems. - Throughput in Computer Networks To define throughput, consider transferring a large file from Host A to Host B across a computer network. The instantaneous throughput at any instant of time is the rate (in bits/sec) at which Host B is receiving the file. If the file consists of F bits and the transfer takes T seconds for Host B to receive all F bits, then the average throughput of the file transfer is F/T bits/sec. Example: consider a server connected by a link of rate Rs bps to a router, which is in turn connected by a link of rate Rc bps to a client. Clearly, the server cannot pump bits through its link at a rate faster than Rs bps; and the router cannot forward bits at a rate faster than Rc bps. If Rs < Rc, then the bits pumped by the server will “flow” right through the router and arrive at the client at a rate of Rs bps, giving a throughput of Rs bps. If, on the other hand, Rc < Rs, then the router will not be able to forward bits as quickly as it receives them. In this case, bits will only leave the router at rate Rc, giving an end-to-end throughput of Rc.
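The two-link reasoning can be sketched with illustrative numbers (Rs, Rc, and the file size F below are assumptions for this sketch):

```python
# End-to-end throughput of a server -> router -> client path is the
# rate of the bottleneck link; the transfer time of F bits follows.
# Rs, Rc and F are illustrative assumptions.

Rs = 2e6            # server access link: 2 Mbps
Rc = 1e6            # client access link: 1 Mbps
F = 32e6            # file size: 32 million bits

throughput = min(Rs, Rc)        # the bottleneck link wins
transfer_time = F / throughput  # seconds to move the whole file

print(f"throughput = {throughput/1e6:.1f} Mbps")   # 1.0 Mbps
print(f"transfer time = {transfer_time:.0f} s")    # 32 s
```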
The throughput is min{Rs, Rc}, that is, it is the transmission rate of the bottleneck link. Having determined the throughput, we can now approximate the time it takes to transfer a large file of F bits from server to client as F/min{Rs, Rc}. Now consider a network with N links between the server and the client, with the transmission rates of the N links being R1, R2, ..., RN. Applying the same analysis as for the two-link network, we find that the throughput for a file transfer from server to client is min{R1, R2, ..., RN}, which is once again the transmission rate of the bottleneck link along the path between server and client. The examples show that throughput depends on the transmission rates of the links over which the data flows. We saw that when there is no other intervening traffic, the throughput can simply be approximated as the minimum transmission rate along the path between source and destination. A third example (a path whose links also carry intervening traffic) shows that more generally the throughput depends not only on the transmission rates of the links along the path, but also on the intervening traffic. In particular, a link with a high transmission rate may nonetheless be the bottleneck link for a file transfer if many other data flows are also passing through that link. 1.5 Protocol Layers and Their Service Models It is apparent that the Internet is an extremely complicated system, but fortunately it is organized as a layered network architecture. - Layered Architecture As an analogy, airline functionality can be divided into layers, providing a framework in which we can discuss airline travel. Note that each layer, combined with the layers below it, implements some functionality, some service. Each layer provides its service by (1) performing certain actions within that layer (for example, at the gate layer, loading and unloading people from an airplane) and by (2) using the services of the layer directly below it. A layered architecture allows us to discuss a well-defined, specific part of a large and complex system.
This simplification itself is of considerable value by providing modularity, making it much easier to change the implementation of the service provided by the layer. As long as the layer provides the same service to the layer above it, and uses the same services from the layer below it, the remainder of the system remains unchanged when a layer’s implementation is changed. For large and complex systems that are constantly being updated, the ability to change the implementation of a service without affecting other components of the system is another important advantage of layering. - Protocol Layering To provide structure to the design of network protocols, network designers organize protocols—and the network hardware and software that implement the protocols—in layers. Each protocol belongs to one of the layers. We are again interested in the services that a layer offers to the layer above—the so-called service model of a layer. Each layer provides its service by (1) performing certain actions within that layer and by (2) using the services of the layer directly below it. Application-layer protocols—such as HTTP and SMTP—are almost always implemented in software in the end systems; so are transport-layer protocols. The network layer is often a mixed implementation of hardware and software. A layer n protocol is distributed among the end systems, packet switches, and other components that make up the network. That is, there’s often a piece of a layer n protocol in each of these network components. Protocol layering has conceptual and structural advantages [RFC 3439]. As we have seen, layering provides a structured way to discuss system components. Modularity makes it easier to update system components. One potential drawback of layering is that one layer may duplicate lower-layer functionality.
A second potential drawback is that functionality at one layer may need information (for example, a timestamp value) that is present only in another layer; this violates the goal of separation of layers. When taken together, the protocols of the various layers are called the protocol stack. The Internet protocol stack consists of five layers: the physical, link, network, transport, and application layers. ❖ Application Layer - message The application layer is where network applications and their application-layer protocols reside. The Internet’s application layer includes many protocols, such as the HTTP protocol (which provides for Web document request and transfer), SMTP (which provides for the transfer of e-mail messages), and FTP (which provides for the transfer of files between two end systems). An application-layer protocol is distributed over multiple end systems, with the application in one end system using the protocol to exchange packets of information with the application in another end system. We’ll refer to this packet of information at the application layer as a message. ❖ Transport Layer - segment The Internet’s transport layer transports application-layer messages between application endpoints. In the Internet there are two transport protocols, TCP and UDP, either of which can transport application-layer messages. TCP provides a connection-oriented service to its applications. This service includes guaranteed delivery of application-layer messages to the destination and flow control (that is, sender/receiver speed matching). TCP also breaks long messages into shorter segments and provides a congestion-control mechanism, so that a source throttles its transmission rate when the network is congested. The UDP protocol provides a connectionless service to its applications. This is a no-frills service that provides no reliability, no flow control, and no congestion control. In this book, we’ll refer to a transport-layer packet as a segment.
❖ Network Layer The Internet’s network layer is responsible for moving network-layer packets known as datagrams from one host to another. The Internet transport-layer protocol (TCP or UDP) in a source host passes a transport-layer segment and a destination address to the network layer. The network layer then provides the service of delivering the segment to the transport layer in the destination host. The Internet’s network layer includes the celebrated IP protocol, which defines the fields in the datagram as well as how the end systems and routers act on these fields. (IP is the glue that binds the Internet together) ❖ Link Layer - frames The Internet’s network layer routes a datagram through a series of routers between the source and destination. To move a packet from one node (host or router) to the next node in the route, the network layer relies on the services of the link layer. In particular, at each node, the network layer passes the datagram down to the link layer, which delivers the datagram to the next node along the route. At this next node, the link layer passes the datagram up to the network layer. The services provided by the link layer depend on the specific link-layer protocol that is employed over the link. Examples: Ethernet, WiFi, and the cable access network’s DOCSIS protocol. As datagrams typically need to traverse several links to travel from source to destination, a datagram may be handled by different link-layer protocols at different links along its route. The network layer will receive a different service from each of the different link-layer protocols. ❖ Physical Layer The job of the physical layer is to move the individual bits within the frame from one node to the next. The protocols in this layer are again link dependent and further depend on the actual transmission medium of the link (for example, twisted-pair copper wire, single-mode fiber optics). A bit may be moved across each link in a different way.
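The packet names just introduced (message, segment, datagram, frame) correspond to successive wrapping as data moves down the five-layer stack; a toy sketch (the header strings "Ht", "Hn", "Hl" are placeholders, not real protocol formats):

```python
# Each layer wraps the payload handed down from the layer above
# with its own header; "Ht", "Hn", "Hl" are placeholder strings.

message = "GET /index.html"          # application layer: message
segment = "Ht|" + message            # transport layer: segment
datagram = "Hn|" + segment           # network layer: datagram
frame = "Hl|" + datagram             # link layer: frame

print(frame)  # Hl|Hn|Ht|GET /index.html

# The receiving host strips the headers in the reverse order,
# layer by layer, until the original message is recovered:
payload = frame.split("|", 3)[-1]
print(payload)  # GET /index.html
```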
- The OSI Model The International Organization for Standardization (ISO) proposed that computer networks be organized around seven layers, called the Open Systems Interconnection (OSI) model. The seven layers of the OSI reference model are: application layer, presentation layer, session layer, transport layer, network layer, data link layer, and physical layer. - presentation layer The role of the presentation layer is to provide services that allow communicating applications to interpret the meaning of data exchanged. These services include data compression and data encryption (which are self-explanatory) as well as data description. - session layer The session layer provides for delimiting and synchronization of data exchange, including the means to build a checkpointing and recovery scheme. Since the Internet stack lacks these two layers, it’s up to the application developer to decide if such a service is important, and if the service is important, it’s up to the application developer to build that functionality into the application. - Encapsulation Consider the physical path that data takes: down a sending end system’s protocol stack, up and down the protocol stacks of an intervening link-layer switch and router, and then up the protocol stack at the receiving end system. Similar to end systems, routers and link-layer switches organize their networking hardware and software into layers. But routers and link-layer switches do not implement all of the layers in the protocol stack; they typically implement only the bottom layers. Link-layer switches implement layers 1 and 2; routers implement layers 1 through 3. The concept of encapsulation: At the sending host, an application-layer message (M) is passed to the transport layer. In the simplest case, the transport layer takes the message and appends additional information (so-called transport-layer header information, Ht) that will be used by the receiver-side transport layer.
The application-layer message and the transport-layer header information together constitute the transport-layer segment. The transport-layer segment thus encapsulates the application-layer message. The added information might include information allowing the receiver-side transport layer to deliver the message up to the appropriate application, and error-detection bits that allow the receiver to determine whether bits in the message have been changed en route. The transport layer then passes the segment to the network layer, which adds network-layer header information (Hn) such as source and destination end system addresses, creating a network-layer datagram. The datagram is then passed to the link layer, which will add its own link-layer header information and create a link-layer frame. Thus, we see that at each layer, a packet has two types of fields: header fields and a payload field. A useful analogy here is the sending of an interoffice memo from one corporate branch office to another via the public postal service. NETWORK UNDER ATTACK The field of network security is about how the bad guys can attack computer networks and about how we can defend networks against those attacks, or better yet, design new architectures that are immune to such attacks in the first place. - The Bad Guys Can Put Malware into Your Host Via the Internet We attach devices to the Internet because we want to receive/send data from/to the Internet. But along with the good stuff comes malicious stuff—collectively known as malware—that can enter and infect our devices. Once malware infects our device it can do all kinds of devious things, including deleting our files and installing spyware that collects our private information, such as social security numbers, passwords, and keystrokes, and then sends this (over the Internet, of course!) back to the bad guys.
Our compromised host may also be enrolled in a network of thousands of similarly compromised devices, collectively known as a botnet, which the bad guys control and leverage for spam e-mail distribution or distributed denial-of-service attacks (soon to be discussed) against targeted hosts. Much of the malware out there today is self-replicating: once it infects one host, from that host it seeks entry into other hosts over the Internet, and from the newly infected hosts, it seeks entry into yet more hosts. In this manner, self-replicating malware can spread exponentially fast. Malware can spread in the form of a virus or a worm. - Viruses are malware that require some form of user interaction to infect the user’s device (the classic example is an e-mail attachment that the user opens). - Worms are malware that can enter a device without any explicit user interaction. For example, a user may be running a vulnerable network application to which an attacker can send malware. In some cases, without any user intervention, the application may accept the malware from the Internet and run it, creating a worm. The worm in the newly infected device then scans the Internet, searching for other hosts running the same vulnerable network application. When it finds other vulnerable hosts, it sends a copy of itself to those hosts. - The Bad Guys Can Attack Servers and Network Infrastructure Another broad class of security threats comprises denial-of-service (DoS) attacks. As the name suggests, a DoS attack renders a network, host, or other piece of infrastructure unusable by legitimate users. Most Internet DoS attacks fall into one of three categories: - Vulnerability attack: This involves sending a few well-crafted messages to a vulnerable application or operating system running on a targeted host. If the right sequence of packets is sent to a vulnerable application or operating system, the service can stop or, worse, the host can crash.
- Bandwidth flooding: The attacker sends a deluge of packets to the targeted host—so many packets that the target’s access link becomes clogged, preventing legitimate packets from reaching the server. - Connection flooding: The attacker establishes a large number of half-open or fully open TCP connections at the target host. The host can become so bogged down with these bogus connections that it stops accepting legitimate connections. It’s evident that if the server has an access rate of R bps, then the attacker will need to send traffic at a rate of approximately R bps to cause damage. If R is very large, a single attack source may not be able to generate enough traffic to harm the server. Furthermore, if all the traffic emanates from a single source, an upstream router may be able to detect the attack and block all traffic from that source before the traffic gets near the server. In a distributed DoS (DDoS) attack, the attacker controls multiple sources and has each source blast traffic at the target. With this approach, the aggregate traffic rate across all the controlled sources needs to be approximately R to cripple the service. - The Bad Guys Can Sniff Packets While ubiquitous Internet access is extremely convenient and enables marvelous new applications for mobile users, it also creates a major security vulnerability—by placing a passive receiver in the vicinity of the wireless transmitter, that receiver can obtain a copy of every packet that is transmitted! These packets can contain all kinds of sensitive information, including passwords, social security numbers, trade secrets, and private personal messages. A passive receiver that records a copy of every packet that flies by is called a packet sniffer. In wired broadcast environments, as in many Ethernet LANs, a packet sniffer can obtain copies of broadcast packets sent over the LAN. Because packet sniffers are passive—that is, they do not inject packets into the channel—they are difficult to detect.
So, when we send packets into a wireless channel, we must accept the possibility that some bad guy may be recording copies of our packets. As you may have guessed, some of the best defenses against packet sniffing involve cryptography. - The Bad Guys Can Masquerade as Someone You Trust The ability to inject packets into the Internet with a false source address is known as IP spoofing, and is but one of many ways in which one user can masquerade as another user. To solve this problem, we will need end-point authentication, that is, a mechanism that will allow us to determine with certainty if a message originates from where we think it does. How did the Internet get to be such an insecure place in the first place? The answer, in essence, is that the Internet was originally designed to be that way, based on the model of “a group of mutually trusting users attached to a transparent network”, a model in which there is no need for security. Many aspects of the original Internet architecture deeply reflect this notion of mutual trust.
HISTORY OF THE INTERNET
1961-1972: development of packet switching:
- 1961: Kleinrock: queueing theory demonstrates the effectiveness of the packet-switching approach
- 1964: Baran: use of packet switching in military networks
- 1967: the ARPAnet project is conceived by the Advanced Research Projects Agency
- 1969: first operational ARPAnet node
- 1972: - public demonstration of ARPAnet - NCP (Network Control Protocol), the first host-to-host protocol - first e-mail program - ARPAnet has 15 nodes
1972-1980: internetworking and proprietary networks
- 1970: ALOHAnet satellite network linking the universities of Hawaii
- 1974: Cerf and Kahn: an architecture for interconnecting networks
- 1976: Ethernet at Xerox PARC
- late 1970s: proprietary architectures: DECnet, SNA, XNA
- late 1970s: packet switching: ATM ante litteram
- 1979: ARPAnet has 200 nodes
- Cerf and Kahn's internetworking guidelines: - minimalism, autonomy: no internal changes are needed to interconnect the various networks - best-effort service model - stateless routers - decentralized control
these guidelines define today's Internet architecture
1980-1990: new protocols, proliferation of networks
- 1983: deployment of TCP/IP
- 1982: definition of the SMTP protocol for e-mail
- 1983: definition of DNS for name-to-IP-address translation
- 1985: definition of the FTP protocol
- 1988: TCP congestion control
- new national networks: Csnet, BITnet, NSFnet, Minitel
- 100,000 connected hosts
1990-2000: commercialization, the Web, new applications
- early 1990s: ARPAnet is decommissioned
- 1991: NSF lifts the restrictions on commercial use of NSFnet
- early 1990s: the Web - hypertext [Bush 1945, Nelson 1960s] - HTML, HTTP: Berners-Lee - 1994: Mosaic, later Netscape - late 1990s: commercialization of the Web
- late 1990s-2007: - the “killer applications” arrive: instant messaging, P2P file sharing -
network security - 50 million hosts, over 100 million users - backbone link speeds on the order of Gbit/s
2008: - 500 million hosts - voice and video over IP - P2P applications: BitTorrent (file sharing), Skype (VoIP), PPLive (video)... - more applications: YouTube, gaming - wireless, mobility
2012: - ~2 billion users - cloud computing - apps and social networks - more devices: smartphones, tablets, the Internet of Things (IoT) - security and privacy
2020: - 4.39 billion users - the Internet “disappears” (IoT) - 5G, Cloud/Fog/MEC - security and privacy
2. APPLICATION LAYER Network applications are the raisons d’être of a computer network: if we couldn’t conceive of any useful applications, there wouldn’t be any need for networking infrastructure and protocols to support them.
Principles of Network Applications At the core of network application development is writing programs that run on different end systems and communicate with each other over the network (for example, the browser program running in the user’s host (desktop, laptop, tablet, smartphone, and so on) and the Web server program running in the Web server host). When developing your new application, you need to write software that will run on multiple end systems. This software could be written, for example, in C, Java, or Python. Importantly, you do not need to write software that runs on network-core devices, such as routers or link-layer switches. Even if you wanted to write application software for these network-core devices, you wouldn’t be able to do so: network-core devices do not function at the application layer but instead function at lower layers, specifically at the network layer and below.
Network Application Architectures The application architecture (different from the network architecture) is designed by the application developer and dictates how the application is structured over the various end systems.
In choosing the application architecture, an application developer will likely draw on one of the two predominant architectural paradigms used in modern network applications: the client-server architecture or the peer-to-peer (P2P) architecture.
- In a client-server architecture, there is an always-on host, called the server, which services requests from many other hosts, called clients. When a Web server receives a request for an object from a client host, it responds by sending the requested object to the client host. With the client-server architecture, clients do not directly communicate with each other; for example, in the Web application, two browsers do not directly communicate. Another characteristic of the client-server architecture is that the server has a fixed, well-known address, called an IP address (which we’ll discuss soon), so a client can always contact the server by sending a packet to the server’s IP address. Often in a client-server application, a single server host is incapable of keeping up with all the requests from clients. For this reason, a data center, housing a large number of hosts, is often used to create a powerful virtual server. The most popular Internet services—such as search engines (e.g., Google), Internet commerce (e.g., Amazon), Web-based e-mail (e.g., Gmail), and social networking—employ one or more data centers.
Server
- Always-on host
- Permanent address
- How to scale?
Client
- Communicates with the server
- May be intermittently connected
- May have a dynamic address
- Does not communicate directly with other clients
- In a P2P architecture, there is minimal (or no) reliance on dedicated servers in data centers. Instead the application exploits direct communication between pairs of intermittently connected hosts, called peers. The peers are not owned by the service provider, but are instead desktops and laptops controlled by users, with most of the peers residing in homes, universities, and offices.
Because the peers communicate without passing through a dedicated server, the architecture is called peer-to-peer. These applications include file sharing (BitTorrent), peer-assisted download acceleration (Xunlei), and Internet telephony and video conferencing (Skype). We mention that some applications have hybrid architectures, combining both client-server and P2P elements. For example, for many instant messaging applications, servers are used to track the IP addresses of users, but user-to-user messages are sent directly between user hosts.
- There is no always-on server
- Arbitrary pairs of hosts (peers) communicate directly with each other
- Peers need not always be on, and they change IP addresses
Easily scalable
Difficult to manage
One of the most compelling features of P2P architectures is their self-scalability. For example, in a P2P file-sharing application, although each peer generates workload by requesting files, each peer also adds service capacity to the system by distributing files to other peers. P2P architectures are also cost effective, since they normally don’t require significant server infrastructure and server bandwidth (in contrast with client-server designs with data centers). However, P2P applications face challenges of security, performance, and reliability due to their highly decentralized structure.
HYBRIDS:
- Skype
❖ A P2P Voice-over-IP application
❖ Centralized server: looks up the address of the remote party
❖ Client-client connection: direct (not through the server)
- Instant messaging
❖ A chat between two users is P2P
❖ Centralized presence detection/location: a user registers its IP address with the central server when it comes online; a user contacts the central server to learn the IP addresses of its buddies
Processes Communicating Processes on two different end systems communicate with each other by exchanging messages across the computer network.
A sending process creates and sends messages into the network; a receiving process receives these messages and possibly responds by sending messages back.
- Client and Server Processes A network application consists of pairs of processes that send messages to each other over a network. In a P2P file-sharing system, a file is transferred from a process in one peer to a process in another peer. For each pair of communicating processes, we typically label one of the two processes as the client and the other process as the server. With the Web, a browser is a client process and a Web server is a server process. With P2P file sharing, the peer that is downloading the file is labeled as the client, and the peer that is uploading the file is labeled as the server. In the context of a communication session between a pair of processes, the process that initiates the communication (that is, initially contacts the other process at the beginning of the session) is labeled as the client. The process that waits to be contacted to begin the session is the server.
- The Interface Between the Process and the Computer Network Most applications consist of pairs of communicating processes, with the two processes in each pair sending messages to each other. Any message sent from one process to another must go through the underlying network. A process sends messages into, and receives messages from, the network through a software interface called a socket (think of it as a door). A socket is the interface between the application layer and the transport layer within a host. It is also referred to as the Application Programming Interface (API) between the application and the network, since the socket is the programming interface with which network applications are built. The application developer has control of everything on the application-layer side of the socket but has little control of the transport-layer side of the socket.
The only control that the application developer has on the transport-layer side is:
1. the choice of transport protocol
2. perhaps the ability to fix a few transport-layer parameters, such as maximum buffer and maximum segment sizes
- Addressing Processes In order to send postal mail to a particular destination, the destination needs to have an address. Similarly, in order for a process running on one host to send packets to a process running on another host, the receiving process needs to have an address. To identify the receiving process, two pieces of information need to be specified: (1) the address of the host and (2) an identifier that specifies the receiving process in the destination host. In the Internet, the host is identified by its IP address (a 32-bit quantity that we can think of as uniquely identifying the host). The sending process must also identify the receiving process (more specifically, the receiving socket) running in the host. This information is needed because in general a host could be running many network applications. A destination port number serves this purpose.
- Transport Services Available to Applications Recall that a socket is the interface between the application process and the transport-layer protocol. The application at the sending side pushes messages through the socket. At the other side of the socket, the transport-layer protocol has the responsibility of getting the messages to the socket of the receiving process. What are the services that a transport-layer protocol can offer to applications invoking it? We can broadly classify the possible services along four dimensions: reliable data transfer, throughput, timing, and security.
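The points above — the socket as the door between application and transport layer, the (IP address, port) pair that addresses a process, and the developer's choice of transport protocol — can be sketched with Python's socket API. This is a local loopback demo under assumed names (the helper `run_server`, port 0 so the OS picks a free port), not a production server:

```python
import socket
import threading

# A process is identified by (IP address, port number); the socket is
# the door between the application layer and the transport layer.
# SOCK_STREAM selects TCP (connection-oriented, reliable);
# SOCK_DGRAM would select UDP instead. Loopback address and the
# helper name run_server are illustrative choices.

def run_server(ready: threading.Event, addr_box: list) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
    addr_box.append(srv.getsockname())   # publish the server's (IP, port)
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()               # server role: wait to be contacted
    data = conn.recv(1024)
    conn.sendall(b"echo:" + data)        # reply over the same connection
    conn.close()
    srv.close()

ready, addr_box = threading.Event(), []
t = threading.Thread(target=run_server, args=(ready, addr_box))
t.start()
ready.wait()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(addr_box[0])                 # client role: initiate the contact
cli.sendall(b"hello")
reply = b""
while len(reply) < len(b"echo:hello"):   # TCP is a byte stream: read until done
    chunk = cli.recv(1024)
    if not chunk:
        break
    reply += chunk
cli.close()
t.join()
print(reply)                             # b'echo:hello'
```

The destination port number plays exactly the role described above: it lets the client reach the right process on the destination host, since that host may be running many network applications at once.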
- Reliable Data Transfer Packets can get lost within a computer network (a packet can overflow a buffer in a router, or can be discarded by a host or router after having some of its bits corrupted). To support applications that cannot tolerate such loss, something has to be done to guarantee that the data sent by one end of the application is delivered correctly and completely to the other end. If a protocol provides such a guaranteed data delivery service, it is said to provide reliable data transfer. One important service that a transport-layer protocol can potentially provide to an application is therefore process-to-process reliable data transfer. When a transport-layer protocol doesn’t provide reliable data transfer, some of the data sent by the sending process may never arrive at the receiving process. This may be acceptable for loss-tolerant applications, most notably multimedia applications such as conversational audio/video, which can tolerate some amount of data loss (resulting at most in a small glitch).
- Throughput In the context of a communication session between two processes along a network path, throughput is the rate at which the sending process can deliver bits to the receiving process. The available throughput can fluctuate with time. These observations lead to another natural service that a transport-layer protocol could provide, namely, guaranteed throughput at some specified rate. With such a service, the application could request a guaranteed throughput of r bits/sec, and the transport protocol would then ensure that the available throughput is always at least r bits/sec. Such a guaranteed throughput service would appeal to many applications. Applications that have throughput requirements are said to be bandwidth-sensitive applications. While bandwidth-sensitive applications have specific throughput requirements, elastic applications can make use of as much, or as little, throughput as happens to be available.
(Electronic mail, file transfer, and Web transfers are elastic applications.)
- Timing A transport-layer protocol can also provide timing guarantees. As with throughput guarantees, timing guarantees can come in many shapes and forms. An example guarantee might be that every bit the sender pumps into the socket arrives at the receiver’s socket no more than 100 msec later (a bound on delay). Such guarantees appeal to applications like Internet telephony, virtual environments, teleconferencing, and multiplayer games.
- Security Finally, a transport protocol can provide an application with one or more security services. For example, in the sending host, a transport protocol can encrypt all data transmitted by the sending process, and in the receiving host, the transport-layer protocol can decrypt the data before delivering it to the receiving process.
- Transport Services Provided by the Internet The Internet (and, more generally, TCP/IP networks) makes two transport protocols available to applications, UDP and TCP. When you (as an application developer) create a new network application for the Internet, one of the first decisions you have to make is whether to use UDP or TCP.
- TCP Services The TCP service model includes a connection-oriented service and a reliable data transfer service. When an application invokes TCP as its transport protocol, the application receives both of these services from TCP.
- Connection-oriented service: TCP has the client and server exchange transport-layer control information with each other before the application-level messages begin to flow. This so-called handshaking procedure alerts the client and server, allowing them to prepare for an onslaught of packets. After the handshaking phase, a TCP connection is said to exist between the sockets. The connection is a full-duplex connection in that the two processes can send messages to each other over the connection at the same time. When the application finishes sending messages, it must tear down the connection.
- Reliable data transfer service: The communicating processes can rely on TCP to deliver all data sent without error and in the proper order. When one side of the application passes a stream of bytes into a socket, it can count on TCP to deliver the same stream of bytes to the receiving socket, with no missing or duplicate bytes. TCP also includes a congestion-control mechanism, a service for the general welfare of the Internet rather than for the direct benefit of the communicating processes. The TCP congestion-control mechanism throttles a sending process (client or server) when the network is congested between sender and receiver.
- UDP Services: UDP is a no-frills, lightweight transport protocol, providing minimal services. UDP is connectionless, so there is no handshaking before the two processes start to communicate. UDP provides an unreliable data transfer service—that is, when a process sends a message into a UDP socket, UDP provides no guarantee that the message will ever reach the receiving process. Furthermore, messages that do arrive at the receiving process may arrive out of order. UDP does not include a congestion-control mechanism, so the sending side of UDP can pump data into the layer below (the network layer) at any rate it pleases. For non-real-time applications, lower delay is always preferable to higher delay, but no tight constraint is placed on the end-to-end delays.
Beyond confidentiality, a transport protocol can also provide other security services, including data integrity and end-point authentication.
- Services Not Provided by Internet Transport Protocols We have already noted that TCP provides reliable end-to-end data transfer, and we also know that TCP can easily be enhanced at the application layer with SSL to provide security services. Today’s Internet can often provide satisfactory service to time-sensitive applications, but it cannot provide any timing or throughput guarantees. Applications such as e-mail, the Web, and file transfer have chosen TCP primarily because TCP provides reliable data transfer, guaranteeing that all data will eventually get to its destination. Because many firewalls are configured to block (most types of) UDP traffic, Internet telephony applications are often designed to use TCP as a backup if UDP communication fails.
- Application-Layer Protocols: An application-layer protocol defines how an application’s processes, running on different end systems, pass messages to each other. In particular, an application-layer protocol defines:
- The types of messages exchanged, for example, request messages and response messages
- The syntax of the various message types, such as the fields in the message and how the fields are delineated
- The semantics of the fields, that is, the meaning of the information in the fields
- Rules for determining when and how a process sends messages and responds to messages
Some application-layer protocols are specified in RFCs and are therefore in the public domain. (For example, the Web’s application-layer protocol, HTTP, is available as an RFC;
- by contrast: Skype uses proprietary application-layer protocols.) It is important to distinguish between network applications and application-layer protocols. An application-layer protocol is only one piece of a network application. For example, the Web is a client-server application that allows users to obtain documents from Web servers on demand. The Web application consists of many components, including a standard for document formats (that is, HTML).
THE WEB AND HTTP
- Overview of HTTP: The HyperText Transfer Protocol (HTTP), the Web’s application-layer protocol, is at the heart of the Web. HTTP is implemented in two programs: a client program and a server program.
