Chapter 1 Domain 1.0: Networking Fundamentals THE FOLLOWING COMPTIA NETWORK+ OBJECTIVES ARE COVERED IN THIS CHAPTER: 1.1 Compare and contrast the Open Systems Interconnection (OSI) model layers and encapsulation concepts. OSI model Layer 1 - Physical Layer 2 - Data link Layer 3 - Network Layer 4 - Transport Layer 5 - Session Layer 6 - Presentation Layer 7 - Application Data encapsulation and decapsulation within the OSI model context Ethernet header Internet Protocol (IP) header Transmission Control Protocol (TCP)/User Datagram Protocol (UDP) headers TCP flags Payload Maximum transmission unit (MTU) 1.2 Explain the characteristics of network topologies and network types. Mesh Star/hub-and-spoke Bus Ring Hybrid Network types and characteristics Peer-to-peer Client-server Local area network (LAN) Metropolitan area network (MAN) Wide area network (WAN) Wireless local area network (WLAN) Personal area network (PAN) Campus area network (CAN) Storage area network (SAN) Software-defined wide area network (SDWAN) Multiprotocol label switching (MPLS) Multipoint generic routing encapsulation (mGRE) Service-related entry point Demarcation point Smartjack Virtual network concepts vSwitch Virtual network interface card (vNIC) Network function virtualization (NFV) Hypervisor Provider links Satellite Digital subscriber line (DSL) Cable Leased line Metro-optical 1.3 Summarize the types of cables and connectors and explain which is the appropriate type for a solution. Copper Twisted pair Cat 5 Cat 5e Cat 6 Cat 6a Cat 7 Cat 8 Coaxial/RG-6 Twinaxial Termination standards TIA/EIA-568A TIA/EIA-568B Fiber Single-mode Multimode Connector types Local connector (LC), straight tip (ST), subscriber connector (SC), mechanical transfer (MT), registered jack (RJ) Angled physical contact (APC) Ultra-physical contact (UPC) RJ11 RJ45 F-type connector Transceivers/media converters Transceiver type Small form-factor pluggable (SFP) Enhanced form-factor pluggable (SFP+) Quad small form-factor pluggable (QSFP) Enhanced quad small form-factor pluggable (QSFP+) Cable management Patch panel/patch bay Fiber distribution panel Punchdown block 66 110 Krone Bix Ethernet standards Copper 10BASE-T 100BASE-TX 1000BASE-T 10GBASE-T 40GBASE-T Fiber 100BASE-FX 100BASE-SX 1000BASE-SX 1000BASE-LX 10GBASE-SR 10GBASE-LR Coarse wavelength division multiplexing (CWDM) Dense wavelength division multiplexing (DWDM) Bidirectional wavelength division multiplexing (WDM) 1.4 Given a scenario, configure a subnet and use appropriate IP addressing schemes. Public vs. private RFC1918 Network address translation (NAT) Port address translation (PAT) IPv4 vs. IPv6 Automatic Private IP Addressing (APIPA) Extended unique identifier (EUI-64) Multicast Unicast Anycast Broadcast Link local Loopback Default gateway IPv4 subnetting Classless (variable-length subnet mask) Classful A B C D E Classless Inter-Domain Routing (CIDR) notation IPv6 concepts Tunneling Dual stack Shorthand notation Router advertisement Stateless address autoconfiguration (SLAAC) Virtual IP (VIP) Subinterfaces 1.5 Explain common ports and protocols, their application, and encrypted alternatives. 
File Transfer Protocol (FTP) 20/21 Secure Shell (SSH) 22 Secure File Transfer Protocol (SFTP) 22 Telnet 23 Simple Mail Transfer Protocol (SMTP) 25 Domain Name System (DNS) 53 Dynamic Host Configuration Protocol (DHCP) 67/68 Trivial File Transfer Protocol (TFTP) 69 Hypertext Transfer Protocol (HTTP) 80 Post Office Protocol v3 (POP3) 110 Network Time Protocol (NTP) 123 Internet Message Access Protocol (IMAP) 143 Simple Network Management Protocol (SNMP) 161/162 Lightweight Directory Access Protocol (LDAP) 389 Hypertext Transfer Protocol Secure (HTTPS) [Secure Sockets Layer (SSL)] 443 HTTPS [Transport Layer Security (TLS)] 443 Server Message Block (SMB) 445 Syslog 514 SMTP TLS 587 Lightweight Directory Access Protocol (over SSL) (LDAPS) 636 IMAP over SSL 993 POP3 over SSL 995 Structured Query Language (SQL) Server 1433 SQLnet 1521 MySQL 3306 Remote Desktop Protocol (RDP) 3389 Session Initiation Protocol (SIP) 5060/5061 IP protocol types Internet Control Message Protocol (ICMP) TCP UDP Generic Routing Encapsulation (GRE) Internet Protocol Security (IPSec) Authentication Header (AH)/Encapsulating Security Payload (ESP) Connectionless vs. connection-oriented 1.6 Explain the use and purpose of network services. DHCP Scope Exclusion ranges Reservation Dynamic assignment Static assignment Lease time Scope options Available leases DHCP relay IP helper/UDP forwarding DNS Record types Address (A) Canonical name (CNAME) Mail exchange (MX) Authentication, authorization, accounting, auditing (AAAA) Start of authority (SOA) Pointer (PTR) Text (TXT) Service (SRV) Name server (NS) Global hierarchy Root DNS servers Internal vs. external Zone transfers Authoritative name servers Time to live (TTL) DNS caching Reverse DNS/reverse lookup/forward lookup Recursive lookup/iterative lookup NTP Stratum Clients Servers 1.7 Explain basic corporate and datacenter network architecture. Three-tiered Core Distribution/aggregation layer Access/edge Software-defined networking Application layer Control layer Infrastructure layer Management plane Spine and leaf Software-defined network Top-of-rack switching Backbone Traffic flows North-South East-West Branch office vs. on-premises datacenter vs. colocation Storage area networks Connection types Fibre Channel over Ethernet (FCoE) Fibre Channel Internet Small Computer Systems Interface (iSCSI) 1.8 Summarize cloud concepts and connectivity options. Deployment models Public Private Hybrid Community Service models Software as a service (SaaS) Infrastructure as a service (IaaS) Platform as a service (PaaS) Desktop as a service (DaaS) Infrastructure as code Automation/orchestration Connectivity options Virtual private network (VPN) Private-direct connection to cloud provider Multitenancy Elasticity Scalability Security implications When I first started on my career path as a network professional 25 years ago, I began by learning the basic concepts of networking by reading a book similar to this one. The original networking concepts have not really changed all that much. Some concepts have been replaced by new ones, and some have just become obsolete. This is because networks have evolved and networking needs have changed over the years. Over the course of your career, you too will see similar changes. However, most of the concepts you learn for the objectives in this domain will become your basis for understanding current and future networks. When learning network concepts, you might feel you need to know everything before you can learn one thing. 
This can be an overwhelming feeling for anyone. However, I recommend that you review the sections again once you've read the entire chapter. Not only does this help with review and memorization, but the pieces will make more sense once you see the entire picture. For more detailed information on Domain 1's topics, please see CompTIA Network+ Study Guide, 5th ed. (978-1-119-81163-3) or CompTIA Network+ Certification Kit, 5th ed. (978-1-119-43228-9), published by Sybex. 1.1 Compare and contrast the Open Systems Interconnection (OSI) model layers and encapsulation concepts. The movement of data from one network node to another is a very complex task, especially when you try to perceive everything happening all at once. Communication between hardware from various vendors is also mind-boggling. Thankfully, the OSI model was created to simplify and standardize the interconnection of hardware from different vendors. In this section you will learn all about the OSI model as it pertains to network communications. OSI Model The Open Systems Interconnection (OSI) reference model was created by the International Organization for Standardization (ISO) to standardize network connectivity between applications, devices, and protocols. Before the OSI model was created, every system was proprietary. Of course, this was back in the days of mainframes and early microcomputers! Today, the OSI layers are used to build standards that allow for interoperability between different vendors. Besides interoperability, the OSI layers have many other advantages. The following is a list of the common networking advantages the OSI layers provide: The reference model helps facilitate communications between various types of hardware and software. The reference model prevents a change in one layer from affecting the other layers. The reference model allows for multi-vendor development of hardware and software based on network standards. The reference model encourages industry standardization because it defines functions of each layer of the OSI model. The reference model divides a complex communications process into smaller pieces to assist with design, development, and troubleshooting. Network protocols and connectivity options can be changed without affecting applications. The last advantage is what I consider the most important for any network administrator. The network communications process is complicated. However, when we break the process down into smaller pieces, we can understand each piece as it relates to the entire process. When you understand what happens at each layer of the OSI model, you will have a better grasp of how to troubleshoot network applications and network problems. When I first learned the OSI layers over 25 years ago, I never thought I would use this knowledge—but I could not be as successful as I am without understanding this layered approach. When we review the upper layers of the OSI model (Application, Presentation, and Session), we will not go into as much depth as we do for the lower layers. The upper layers are generally where developers create applications, whereas the lower layers are where network administrators support the applications. In Figure 1.1 you can see the seven layers of the OSI model. The top three layers are where applications operate. The Transport and Network layers are where TCP/IP operates. The Data Link and Physical layers are where connectivity technology, such as wireless or Ethernet, operates.
These groupings are considered macro layers and will help you understand the OSI layers better as we progress through each individual layer. FIGURE 1.1 The layers of the OSI Application Layer The Application layer (layer 7) is the highest layer of the communication process. It is the layer that provides the user interface to the user and often the beginning of the communication process. Applications like Edge or Internet Explorer have an interface for the user, and they are considered network applications. Applications such as Microsoft Word do not communicate with the network and are therefore considered end-user applications or stand-alone applications. Although you can store your Word document on the network, the purpose is not to facilitate network communications such as Edge or Internet Explorer do. There is a running joke in networking that some problems are layer 8 problems; that would be the user. The Application layer defines the role of the application, since all network applications are generally either client or server. A request for information is started at the Application layer through one of three methods: a graphical user interface (GUI), a console application, or an application programming interface (API). These terms are synonymous with the Application layer. A request for information can begin with a click of a mouse, a command in an application, or via an API call. The Application layer also defines the purpose of the application. A file transfer application will differ significantly in design from an instant messaging application. When a programmer starts to design a network application, this is the layer the programmer begins with because it will interface with the user. As firewalls have advanced throughout the years, it is now common to find firewalls operating at layer 7. Chapter 2, “Domain 2.0: Network Implementations,” covers next-generation firewall (NGFW) layer 7 firewalls that operate at these higher layers. Many events begin at the Application layer. The following are some common application layer events, but in no way is this a complete list. The list of application protocols—and the events that begin at this layer—grows by the minute. Sending email Remote access Web surfing File transfer Instant messenger VoIP calls Presentation Layer The Presentation layer (layer 6) is the layer that presents data to the Application layer. This layer is responsible for encryption/decryption, translation, and compression/decompression. When a stream of data comes from the lower layers, this layer is responsible for formatting the data and converting it back to the original intended application data. An example is a web request to a web server for an encrypted web page via Transport Layer Security (TLS), which was formerly the Secure Sockets Layer (SSL) protocol. The web page is encrypted at the web server and sent to the client. When the client receives the page, it is decrypted and sent to the Application layer as data. This process is bidirectional, and it is important to note that the presentation layer on both the client and server make a connection to each other. This is called peer-layer communications, and it happens at all layers of the OSI model in different ways. An example of translation services that are performed at this layer is converting Extended Binary Coded Decimal Interchange Code (EBCDIC) data to American Standard Code for Information Interchange (ASCII) or converting ASCII to Unicode. 
Examples of compression and decompression, often referred to as codecs, are MP3 to network streaming protocols and H.264 video to streaming protocols. In addition, JPEG, GIF, PICT, and TIFF operate at the Presentation layer by compressing and decompressing image formats when used in conjunction with a network application like your web browser. Session Layer The Session layer (layer 5) is responsible for the setup, management, and teardown of a session between two computers. This layer is also responsible for dialogue control. Application developers must decide how their application will function with the network at this layer in respect to the network conversation. There are three basic forms of communications a network application can use at the Session layer: Half-duplex is a two-way communication between two hosts where only one side can communicate at a time. This is similar to a walkie-talkie and is how many protocols operate. A web browser will request a page from the web server and the web server will return the page. Then the web browser asks for the other elements contained in the Hypertext Markup Language (HTML) web page. In recent years, web developers have made half-duplex seem like a full-duplex conversation with Ajax (Asynchronous JavaScript and eXtensible Markup Language, or XML) requests by sending each keystroke and querying a response. However, it is still a half-duplex conversation. Full-duplex is two-way communication between two hosts where both sides can communicate simultaneously. Not only is this type of communication similar to a telephone call, but it is used by VoIP to make telephone calls over a network. This type of dialogue control is extremely tough for programmers since they must program for real-time events. Simplex is a one-way communication between two hosts. This type of communication is similar to tuning to a radio station—you do not have any control of the content or communications received. Transport Layer The Transport layer (layer 4) is the first layer that we network administrators are responsible for maintaining. A good grasp of the upper three layers is important so that we can properly troubleshoot these lower layers. The Transport layer for TCP/IP contains two protocols that you will learn more about in objective 1.5, “Explain common ports and protocols, their application, and encrypted alternatives.” The Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) protocols operate at the Transport layer, and the programmer of the network application must decide which to program against. At this layer, the operating system presents the application with a socket to communicate with on the network. In the Windows operating system, it is called a Winsock; in other operating systems like Linux, it is called a socket. When we discuss the socket in the context of networking, it is called a port. All of these terms are basically interchangeable. I will refer to it as a port for the remainder of this section. When a network server application starts up, it will bind to the port, as shown in Figure 1.2. The server application will then listen for requests on this port. The programmer will choose which port and protocol to use for their server application. Because UDP/TCP and the port number define the application, it is common to find firewalls operating at this layer to allow or block application access. FIGURE 1.2 Transport server port binding So far I have discussed how the server application listens for requests. 
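Port binding is easy to see from code. Below is a minimal sketch of the server side of this idea, using Python's standard socket module; the port number 8080 and the address strings are arbitrary examples and are not tied to any real service.

import socket

# A server application binds to a port and then listens for requests on it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)    # TCP socket
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8080))     # bind the socket to the chosen port
server.listen(5)                   # listen for incoming connection requests
print("Listening on TCP port 8080...")

conn, addr = server.accept()       # blocks until a client connects
print(f"Connection from {addr[0]}, source port {addr[1]}")  # client IP and its source port
conn.close()
server.close()

Running this and pointing any TCP client at port 8080 demonstrates the bind-and-listen behavior described above; the accepted connection also reveals the client's source port, which leads into the client side of the story.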
Now I will explain how client applications use ports for requests. When a client needs to request information from a server, the client application will bind to a dynamically available port above 1023 as the source port. Because of this dynamic allocation and the short lifespan of the port number, these ports are referred to as ephemeral port numbers. On the other hand, port numbers 1023 and below are defined in RFC 3232 (or just see www.iana.org). These lower port numbers are called well-known port numbers, and they're reserved for servers. In the example in Figure 1.3, a web browser is creating a request for three elements on a web page to the server. The client will bind port numbers 1024, 1025, and 1026 to the web browser and send the requests to the destination port number of 80 on the web server. When the three requests return from the web server, they will be returning from the source port number of 80 on the web server to the destination port numbers of 1024, 1025, and 1026 on the client. The client can then pass the proper element to the web page via the incoming data on the respective port number. Once the client receives the information, both the client and server will close the session for the port and the port can be recycled. UDP port numbers will be automatically recycled after a specific period of time, because the client and server do not communicate the state of the connection (UDP is connectionless). TCP port numbers are also automatically recycled after a specific period of time, but only after the conversation is finished using the port number. TCP communicates the state of the connection during the conversation (TCP is connection-based). FIGURE 1.3 Transport client requests It is important to note a few concepts that recur throughout this discussion of the OSI layers. The first concept is that each layer of the OSI model communicates with the same layer on the other host—this is called peer-layer communications. The second concept is that every layer communicates with the layer above and the layer below. The Transport layer performs this communication to the layer above with the use of a port number. The Transport layer communicates with the layer below by moving information down to the Network layer from either the TCP or UDP protocol.
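Before moving on to the Network layer, here is the client side of the same sketch, again using Python's standard socket module; example.com and port 80 are simply stand-ins for any web server, and the exact ephemeral range varies by operating system.

import socket

# The client connects to a well-known destination port (80 for HTTP here);
# the operating system assigns the dynamic (ephemeral) source port for us.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("example.com", 80))

src_ip, src_port = client.getsockname()   # our side of the conversation
dst_ip, dst_port = client.getpeername()   # the server's side

print(f"Ephemeral source port: {src_port}")        # chosen by the OS at connect time
print(f"Well-known destination port: {dst_port}")  # 80
client.close()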
In the next section, you will learn how this information is conveyed and used by the Network layer. Network Layer The Network layer (layer 3) is responsible for the logical numbering of hosts and networks. The Network layer is also responsible for transporting data between networks through the process of routing. Routers operate at the Network layer to facilitate the movement of packets between networks; therefore, routers are considered layer 3 devices. Figure 1.4 details three networks that are logically numbered with IP addresses, each belonging to a unique network. We will explore network routing in Chapter 2, "Domain 2.0: Network Implementations," in the section "Compare and contrast routing technologies and bandwidth management concepts" (objective 2.2). FIGURE 1.4 Logical network addressing The IP protocol is not the only protocol that functions at this layer; ICMP also functions at the Network layer. There are many other Network layer protocols, but for the remainder of this discussion of objective 1.1 we will focus on the IP protocol. The IP protocol at the Network layer communicates with the layer above by using a protocol number. The protocol number at the Network layer helps the IP protocol move the data to the next protocol. As you can see in Figure 1.5, when the protocol number is 6, the data is decapsulated and delivered to the TCP protocol at the Transport layer. When the protocol number is 17, the data is delivered to the UDP protocol at the Transport layer. Data does not always have to flow up to the Transport layer. If the protocol number is 1, the data is moved laterally to the ICMP protocol. FIGURE 1.5 Network layer protocol numbers Data Link Layer The Data Link layer (layer 2) is responsible for the framing of data for transmission on the Physical layer or media. The Data Link layer is also responsible for the static addressing of hosts. At the Data Link layer, unique MAC addresses are preprogrammed into the network cards (computers) and network interfaces (network devices). This preprogramming of the unique MAC address is sometimes referred to as being burnt-in, but modern network interface cards (NICs) allow you to override their preprogrammed MAC address. The Data Link layer is only concerned with the local delivery of frames in the same immediate network. At the Data Link layer, there are many different frame types. Since we are focused on TCP/IP, the only frame types we will discuss are Ethernet II frame types. Switching of frames occurs at the Data Link layer; therefore, this layer is where switches operate. As shown in Figure 1.6, the Data Link layer is divided into two sublayers: the logical link control (LLC) layer and the media access control (MAC) layer. The LLC layer is the sublayer responsible for communicating with the layer above (the Network layer). The LLC sublayer is where CPU cycles are consumed for the processing of data. The MAC layer is responsible for the hardware processing of frames and the error checking of frames. The MAC layer is where frames are checked for errors, and only relevant frames are passed to the LLC layer. The MAC layer saves CPU cycles by processing these checks independently from the CPU and the operating system. The MAC layer is the layer responsible for the transmission of data on a physical level. FIGURE 1.6 The Data Link layer and the sublayers within The LLC layer communicates with the Network layer by coding the protocol type into a field in the frame itself, called the Ethernet type field. It carries the protocol number for which traffic is destined, as shown in Figure 1.7. You may ask whether IP is the only protocol used with TCP/IP, and the answer is no. Although TCP/IP uses the IP protocol, a helper protocol called the Address Resolution Protocol (ARP) is used to convert IP addresses into MAC addresses. Other protocols that can be found in this field are FCoE, 802.1Q, and PPPoE, just to name a few. FIGURE 1.7 The LLC sublayer and the Network layer The MAC sublayer is responsible for the synchronization, addressing, and error detection of the framing. In Figure 1.8 you can see the complete Ethernet II frame with the LLC layer (type field). The frame begins with the preamble, which is 7 bytes of alternating 1s and 0s at a synchronization frequency according to the speed of the connectivity method. The start frame delimiter (SFD) is a 1-byte field and technically it's part of the preamble. The SFD contains an extra trailing bit at the end of the byte to signal the start of the destination MAC address (10101011).
The preamble and SFD help the receiving side form a time reference for the rest of the frame signaling; the preamble synchronizes the physical timing for both sides of the transmission. Hence, it is the way the Data Link layer communicates with the layer below (the Physical layer). The destination MAC address is a 6-byte field and represents the physical destination of the data. The source MAC address is a 6-byte field and represents the physical source of the data. The type field is a 2-byte field, as described earlier, and is part of the LLC sublayer. The data field can vary between 46 bytes and a maximum of 1500 bytes. The frame check sequence (FCS) is a cyclic redundancy check (CRC), which is a calculation over the entire frame for error detection. If the CRC does not match the frame received, the frame is automatically discarded at the MAC sublayer as invalid data. FIGURE 1.8 An Ethernet II frame A MAC address is a 48-bit (6-byte) physical address burned into the network controller of every network card and network device. The address is normally written as a hexadecimal expression such as 0D-54-0D-C0-10-52. The MAC address format is governed by, and partially administered by, the IEEE. In Figure 1.9, a MAC address is shown in bit form. The Individual Group (I/G) bit controls how the switch handles broadcast traffic or individual traffic for the MAC address. If the I/G bit in a MAC address is 0, then it is destined for an individual unicast network device. If the I/G bit in a MAC address is a 1, then the switch treats it as a broadcast or multicast frame. The Global/Local (G/L) bit signifies if the MAC address is globally governed by the IEEE or locally set by the administrator. If the G/L bit in the MAC address is a 0, then the MAC address is globally unique because it has been governed by the IEEE. If the G/L bit in the MAC address is 1, then it is locally governed—that is, it is statically set by an administrator. The organizationally unique identifier (OUI) is governed by the IEEE for each vendor that applies to make networking equipment. The I/G bit, G/L bit, and the OUI make up the first 24 bits of a MAC address. The last 24 bits are assigned by the vendor for each network controller that is produced. This is how the IEEE achieves global uniqueness of every MAC address for networking equipment. FIGURE 1.9 MAC address format The IEEE publishes the list of OUIs that have been registered for network controller production. With this list, you can determine the manufacturer of the device from the first six hexadecimal digits of a MAC address. Many protocol analyzers use this same list to translate the source and destination fields in a frame to a friendly manufacturer ID. The complete list changes daily and can be found at https://regauth.standards.ieee.org/standards-ra-web/pub/view.html#registries. The site www.wireshark.org/tools/oui-lookup.html has a friendly search function for parsing this list and returning the vendor.
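As a quick illustration of the bit fields just described, the short sketch below pulls the I/G bit, the G/L bit, and the OUI out of a MAC address. The address used here is a made-up example, not a registered OUI.

# Inspect the I/G bit, G/L bit, and OUI of a MAC address (Python).
mac = "00-1A-2B-3C-4D-5E"                  # made-up example address
octets = [int(part, 16) for part in mac.split("-")]

first = octets[0]
ig_bit = first & 0b00000001           # least significant bit of the first octet
gl_bit = (first >> 1) & 0b00000001    # next bit of the first octet

print("Unicast" if ig_bit == 0 else "Broadcast/multicast")         # I/G bit
print("Globally unique (IEEE)" if gl_bit == 0 else "Locally set")  # G/L bit
print("OUI:", "-".join(f"{o:02X}" for o in octets[:3]))            # first 24 bits

Feeding a real address into the Wireshark OUI lookup mentioned above would return the registered vendor, assuming the OUI is actually assigned.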
Physical Layer The Physical layer is responsible for transmitting the data of 1s and 0s that is passed down from the Data Link layer. The data consisting of 1s and 0s is modulated or encoded for transmission via radio waves, light, electricity, or any other physical method of transmitting data. The Physical layer is an integral component of many different types of transmission methods such as wireless (802.11), fiber optics, and Ethernet, just to name a few. In all cases, the Physical layer is tied directly to the Data Link layer, so together the Physical layer and the Data Link layer are considered a macro layer. This macro layer allows an application to transmit in the same way over an Ethernet connection as it does over a wireless connection, such as when you disconnect and go wireless. Hubs and repeaters operate at the Physical layer because they are not tied to the Data Link layer—they just repeat the electrical signals. The Physical layer also defines the connection types used with the various networking technologies. The Physical layer is the most common place to find problems, such as a loose connection or bad connection. A list of the different connection types and transmission media can be found in the section "Summarize the types of cables and connectors and explain which is the appropriate type for a solution" (objective 1.3). Protocol Data Units The term protocol data unit (PDU) describes the type of data transferred at each layer of the OSI model. Using the proper PDU when describing data helps avoid misunderstandings when speaking with other network professionals. Throughout this book I will use PDUs to describe data; see if you take notice when I do refer to a PDU and try to match it with the layer it operates on. The layers of the OSI model and their corresponding PDUs can be seen in Figure 1.10. The first three layers of the OSI model (Application, Presentation, and Session) reference the components of an application. The PDU for these upper layers is the user datagram, or just datagram. The datagrams are created by the application and passed to the Transport layer. The Transport layer is where segments are created from the datagrams, and then the segments are passed to the Network layer. At the Network layer, packets are created from the segments and passed to the Data Link layer. The Data Link layer creates frames for transmitting the data in bits at the Physical layer. FIGURE 1.10 OSI layers and PDUs Data Encapsulation and Decapsulation Throughout this review of the OSI model, you may have noticed a running theme. Every layer has identifying information such as a port number, the TCP or UDP protocol, IP address, and a frame. Each layer communicates with the layer above and the layer below using this information. Let's review the data encapsulation process as shown in Figure 1.11. FIGURE 1.11 Encapsulation and decapsulation As data is transmitted, data encapsulation is the process of passing a PDU down to the next layer in the protocol stack. When it reaches this layer, information is written into the PDU header or type field (frame). This information explains to the current layer which upper layer the payload (data) came from; this will be important for the decapsulation process. The prior PDU is now considered nothing more than a payload of data at this layer in the transmit process. As data is received, data decapsulation is the process of passing the payload (data) up to the next layer in the protocol stack. When the payload is decapsulated, the information is read from the PDU header or type field (frame). This allows the current layer to know which upper layer to pass the payload to. As the data is passed upward in the stack, it becomes a PDU again. In simple terms, if there were two buildings and a worker on the 7th floor of one building wanted to send a message to a worker in the other building on the 7th floor, the worker in the first building would write a note (datagram).
This note would then be placed into an envelope and information would be written on the envelope detailing whom it came from. In addition, this envelope (segment) would have department information, such as which department it came from and which department it is going to (port numbers). The first worker would also choose the delivery company, either speedy delivery with no guarantees of delivery (UDP) or slow and steady delivery with acknowledgment of delivery (TCP). This envelope would then be placed into another envelope (packet) and on this envelope they would fill out information such as the street address the packet was coming from and going to. They would also make a note as to which delivery service was handling the message. This envelope would then go into another envelope (frame) detailing which door it came from and which door it was going to. Encapsulation is basically envelopes inside of envelopes; each envelope performs its own function at each layer in the protocol stack. When the data arrives at the destination, each envelope is opened up and the envelope inside is handed off to the next destination it is intended for. Now let's take a closer look at these headers that will hold the information describing where data came from and how to hand it off on the other side of the conversation. UDP The User Datagram Protocol (UDP) is a transport protocol for TCP/IP. UDP is one of two protocols at the Transport layer that connect network applications to the network. When application developers choose to use UDP as the protocol their application will work with, they must take several considerations into account. UDP is connectionless, which means that data is simply passed from one IP address over the network to the other IP address. The sending computer won't know if the destination computer is even listening. The receipt of the data is not acknowledged by the destination computer. In addition, the data blocks sent are not sequenced in any way for the receiving computer to put them back together. In Figure 1.12 you can see a UDP segment; the header has only a source port, destination port, length of data, and checksum field. FIGURE 1.12 UDP segment You may be wondering at this point why you would ever use UDP. We use UDP because it is faster than TCP. The application developer must make the application responsible for the connection, acknowledgment, and sequencing of data if needed. As an example, a Network Time Protocol (NTP) client uses UDP to send short questions to the NTP server, such as, What is the time? We don't need a large amount of overhead at the Transport layer to ask a simple question like that. Other protocols, such as the Real-time Transport Protocol (RTP) VoIP protocol, don't care to acknowledge segments or retransmit segments. If a segment of data doesn't make it to the destination, RTP will just keep moving along with the voice data in real time. TCP Transmission Control Protocol (TCP) is another transport protocol for TCP/IP. Just like UDP, TCP is a protocol at the Transport layer that connects network applications to the network. When application developers choose to use TCP as the protocol their applications will work with, the protocol is responsible for all data delivery. TCP has all the bells and whistles for a developer. TCP is a connection-oriented protocol. During the transmission of information, both ends create a virtual circuit over the network. All data segments transmitted are then sequenced, acknowledged, and retransmitted if lost in transit. 
TCP is extremely reliable, but it is slower than UDP. When the sending computer transmits data to a receiving computer, a virtual connection is created using a three-way handshake, as shown in Figure 1.13. During the three-way handshake, the window buffer size on each side is negotiated with the SYN and ACK flags in the TCP header. When both the sender and receiver acknowledge the window's size, the connection is considered established and data can be transferred. When the data transfer is completed, the sender can issue a FIN flag in the TCP header to tear down the virtual connection. FIGURE 1.13 TCP three-way handshake The buffer size negotiated in the three-way handshake determines the sliding TCP window for acknowledgment of segments during the transfer. As shown in Figure 1.14, the negotiated TCP sliding window size is 3. After three sequenced packets are delivered and put back in order on the receiving computer, the receiving computer sends an acknowledgment for the next segment it expects. If a segment is lost and the window cannot fill to the negotiated three-segment window size to send the acknowledgment, the acknowledge timer will be triggered on the receiver, and the receiver will acknowledge the segments it currently has received. The sender's retransmit timer will also expire, and the lost segment will be retransmitted to the receiving computer. This is how the sequencing and acknowledgment of segments operate with the use of TCP sliding windows. In Figure 1.15, the TCP header is shown with all the fields just described. FIGURE 1.14 TCP sliding window example FIGURE 1.15 TCP segment IP The Internet Protocol (IP) is a Network layer protocol that allows for the logical addressing of networks and hosts. The addressing of networks is the mechanism that allows routing to be used. The addressing of the hosts within the networks is the mechanism that allows end-to-end connectivity to be achieved over the network. UDP and TCP function on top of the IP protocol. UDP and TCP are protocols that handle the data for the applications. The IP protocol is responsible for encapsulating these protocols and delivering them to the appropriate addresses. At this point, you are probably imagining a letter that is folded and put into an envelope that is addressed from the sender to the destination. You would be correct—the IP protocol handles the delivery of data segments from applications in IP packets. Figure 1.16 shows the fields that are contained in an IP packet header. I will cover the most important fields as they pertain to this exam. The first 4 bits contain the version of IP; this is how IPv4 and IPv6 packets are differentiated. The priority and type of service (ToS) fields are used for quality of service (QoS). The time to live (TTL) field is used for routing so that packets are not endlessly routed on the Internet. The protocol field defines where to send the data next—UDP, TCP, ICMP, and so on. Then of course we have the source and destination IP address fields for routing to the destination computer and for the response from the destination computer. Throughout this book, I will be covering TCP/IP in depth because it is the predominant protocol in all networks today. FIGURE 1.16 An IP packet
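To tie these header fields together, here is a minimal sketch that builds a bare 20-byte IPv4 header with Python's struct module and then unpacks it again. All of the values and addresses are made up for the example; real headers are, of course, produced by the operating system's IP stack.

import struct, socket

# Pack a bare-bones IPv4 header (no options) just so we can unpack it again.
version_ihl = (4 << 4) | 5           # version 4, header length 5 x 32-bit words
tos, total_len, ident = 0, 40, 1
flags_frag, ttl, proto = 0, 64, 6    # protocol 6 = TCP (17 = UDP, 1 = ICMP)
src = socket.inet_aton("192.168.1.10")
dst = socket.inet_aton("10.0.0.5")
header = struct.pack("!BBHHHBBH4s4s", version_ihl, tos, total_len,
                     ident, flags_frag, ttl, proto, 0, src, dst)

# Decapsulation view: read the fields back out of the raw bytes.
fields = struct.unpack("!BBHHHBBH4s4s", header)
print("Version:", fields[0] >> 4)                   # 4
print("TTL:", fields[5])                            # 64
print("Protocol:", fields[6])                       # 6, so hand the payload to TCP
print("Source:", socket.inet_ntoa(fields[8]))       # 192.168.1.10
print("Destination:", socket.inet_ntoa(fields[9]))  # 10.0.0.5

The protocol value read back here is exactly the protocol number discussed earlier at the Network layer; a value of 6 tells the receiving stack to pass the payload up to TCP.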
MTU The maximum transmission unit (MTU) is the largest size of the data that can be transferred at the Data Link layer. The data being transferred is also known as the payload of the frame. The MTU for Ethernet is 1500 bytes. Adding 12 bytes for the destination and source MAC addresses, a 2-byte type field, and 4 bytes for the frame check sequence (FCS) brings the total frame size to 1518 bytes. The smallest payload is 46 bytes, which results in a minimum frame size of 64 bytes when the frame fields are included. The MTU is often referred to as a layer 3 data size. When data is passed down to the Data Link layer, the packet is sized to the MTU of the Data Link layer. Therefore, we can consider the MTU a constraint on the Network layer. However, it is usually adjustable only at the Data Link layer, such as when you're configuring a switch port on a switch. The Ethernet specification allows for either an MTU of 1500 bytes or an MTU of 9000 bytes. When the MTU is increased to 9000 bytes, the frame is considered a jumbo frame. I will discuss jumbo frames in greater detail in Chapter 2, "Domain 2.0: Network Implementations." Exam Essentials Understand the various layers of the OSI model and how they facilitate communications on each layer. The Application layer is the beginning of the communication process with the user and is where applications are defined as client or server. The Presentation layer converts data formats, encrypts and decrypts, and provides compression and decompression of data. The Session layer is responsible for setup, maintenance, and teardown of the communications for an application as well as dialogue control. The Transport layer is responsible for flow control of network segments from the upper layers. The Network layer is responsible for the logical assignment and routing of network and host addresses. The Data Link layer is the layer responsible for the framing of data for transmission via physical media. The Physical layer is the layer at which data is transmitted via air, light, or electricity. Know the various protocol data units. Protocol data units (PDUs) are used to describe payloads of data at each layer of the OSI model. Using the proper PDU to describe data, network professionals can avoid miscommunication while dealing with network problems. PDUs not only describe the data, they also directly describe the layer being discussed. Understand the encapsulation and decapsulation process. Encapsulation is the process in which data is passed down the protocol stack and each upper layer becomes a data payload at the next layer down along with identifying information for the decapsulation process. The decapsulation process is the reverse of the encapsulation process, taking the payload and decapsulating it back to the next layer, while passing it to the proper protocol above in the protocol stack. Know the common TCP flags used for communication. The three-way handshake uses the SYN and ACK flags to establish a virtual communication circuit between two network nodes. When an established virtual circuit is complete, an RST or FIN flag is sent that tears down the established virtual communication circuit. Understand the difference between headers for various layers of the OSI. UDP has an 8-byte header because it is a connectionless protocol and requires very few fields. The TCP header has quite a few more fields than UDP because TCP is a connection-based protocol and must sequence and acknowledge segments transmitted. Both TCP and UDP contain the element of a port number to direct information to the destination service and back to the requesting application. The IP header is used to route packets and therefore has destination and source IP address fields and other fields related to routing.
In addition, it contains a protocol field that explains the payload of data and assists in handing the payload to upper layer protocols. Ethernet frames contain destination and source fields as well as a protocol field called the type field. This type field describes the payload of the frame and also assists the Data Link layer in handing the payload to upper layer protocols. 1.2 Explain the characteristics of network topologies and network types. The topology of a network defines the shape in which the network is connected. Many of the topologies I will cover are no longer relevant for networking. However, you should understand how information moves within the topology, because you will see other technologies use these topologies. The topology is a schematic of the overall network. Besides the topology of our network, sections of our network are defined by functional type such as local area network (LAN) and wide area network (WAN). In this and the following sections, you will learn about various functional types of networks. A new emerging technology called the Internet of Things (IoT) is becoming the responsibility of the network administrator. There are several networking technologies that you must understand to support IoT devices. We will explore several common networking technologies used with IoT in the following sections. Wired Topologies I'll discuss several wired topologies that are no longer used for Ethernet networking. However, that doesn't mean they are deprecated and out of use everywhere. You will see many of these topologies in other areas of technology, such as storage area networks (SANs), industrial control systems, and WANs. Logical vs. Physical There are two main types of topologies: physical and logical. If you ever sit down to document a network and end up with a mess of lines and details, you are trying to display both the physical and logical in one drawing. The logical topology of a network should be a high-level view of the information flow through semi-generic components in your network. This shows how the network operates and should be your first drawing. The physical topology defines how it is physically connected, such as which port on the router is connected to which port on a switch, and so on. Star The star topology is widely used in networks today, and it's the main topology used to connect edge devices (end users). All network devices are wired back to a hub or switch. The computers can be next to each other or spread out across an office space, but all communication goes back to a central location. This topology has been widely adopted because it concentrates the failure and diagnostic points in a central location. Another added benefit is that we can swap out the edge switches all from the same location. A disadvantage is that if a switch fails, every device connected to the switch is affected. Many buildings will have multiple star topologies; as an example, the edge switch is wired back to a larger switch, sometimes called a core switch. In Figure 1.17, you see a typical example of a star topology. FIGURE 1.17 A typical star topology Ring The ring topology was common over 25 years ago in the form of token ring (IEEE 802.5). IBM produced a lot of the hardware used in token ring networks, which operated at speeds of 4 Mbps or 16 Mbps. The networked devices would pass a token around the ring; the device holding the token could transmit a message around the ring. In Figure 1.18, you can see a logical ring topology.
Physically the computers had one wire connected, similar to networks today. The wire consisted of a ring in pair and a ring out pair. Token ring is now a deprecated technology for LAN connectivity with the IEEE 802.5 specification. However, token ring topologies are still used in industrial control system (ICS) applications. Ring topologies are also still used for WAN connectivity; they are used with SONET rings and Fiber Distributed Data Interface (FDDI) rings. I will cover LANs and WANs in this chapter. The ring topology is still popular in WAN design because it can be designed to be resilient in the case of a failure. FIGURE 1.18 A logical ring topology Mesh The full mesh is a topology often used in data centers because it allows for redundant connection in the event of a component failure. Cloud computing uses a lot of mesh-type connectivity because a failure should not hinder the customer(s). You will not see this used at the edge of a network where end-user computers connect to the network, mainly because it is too costly. If you wanted to calculate how many connections between four switches you would need to achieve a full mesh, you would use the formula of connections = n(n - 1) ÷ 2, where n is the number of switches. In this example, you would need six cable connections between the switches using the formula (4 × 3 ÷ 2 = 6). More importantly, you would need to have three switch ports available on each switch, because each switch connects directly to the other three. In Figure 1.19, you can see a full mesh between four network switches. If you have a failure on any cable or switch, the network will continue to function. If a switch were to go down, the failure would be isolated to the failed switch. FIGURE 1.19 A physical topology of a full mesh The Internet is not really a full mesh; it is a partial mesh. This is due to the costs of running cables to every provider on the Internet. So providers have partial meshes connecting them to upstream providers. When there is a failure on the Internet, it is usually localized to the path to the provider. Many providers have their own redundancy internally in their networks and use full meshes internally. Bus The bus concept is the most important networking concept to understand. It established the baseline for nearly all networking concepts and improvements that followed. The bus topology was common in networks 25 years ago; it is now considered legacy in its design. It used coaxial cables joining computers with BNC connectors. The reason it is deprecated is that a failure on the bus would affect all the computers on the bus. These networks also required external terminators on the ends of the bus segment. Terminators are basically resistors; they stopped electrical signals from reflecting back in the direction they came from. So why are we talking about bus networks? Bus networks are how SCSI, RS-422 (industrial serial), and many other types of technologies work. It is important to understand how they work so that you can diagnose problems in these other technologies. When a computer wants to communicate on a bus network, it sends the signal out and all other computers see the message. Only the computer identified by the destination MAC address processes the message and responds. SCSI disk networks use a device ID similar to how the MAC address is used on computer bus type networks. You can see this comparison in Figure 1.20. FIGURE 1.20 A comparison of bus networks to SCSI disk networks Hybrid The hybrid topology is more representative of internal networks today.
Hybrid topology design combines multiple topologies for resiliency, load balancing, and connectivity. In Figure 1.21, you can see several different topologies being used together to effectively create redundancy and connectivity. The edge switches are connected in a star topology, the distribution switches are connected in a partial mesh, and the core and distribution are connected in a full mesh. Also notice that the WAN connectivity is being supplied by a SONET ring topology. FIGURE 1.21 A hybrid topology Types When we refer to parts of our network, we classify the section of network with a type. This designation of type helps us generalize its use and function. Consider your property; you have inside doors, outside doors, and storm doors. The inside doors serve the function of privacy. The outside doors function the same but add security. The storm doors are used for security and safety. Our network has different areas that we label with these types so that we can quickly identify the areas' purpose. The type of network also helps us plan for infrastructure that the network will serve. Client-Server When I think back to when I first started 25 years ago in networking, I remember the dominant network operating system was Novell NetWare. Although it has been discontinued for almost a decade, it still serves a purpose as an example of a strict client-server network operating system. Servers were only servers and could only serve up printers or files, and clients could only be clients and could not serve files or printers. Clients used the resources from the servers, such as printers or files, and servers only existed to serve the clients. Today, we have a more flexible peer networking model in which clients can act as both clients and servers, and servers can do the same. However, the client-server model can still be applied to applications where clients access the resources for information. An example of this is a typical mail client that accesses the mail server. There is a strict client-server relationship in this example, where the client cannot serve mail because it was designed with one purpose, to act as a client. The same is true of the mail server; it was designed with one purpose, to act as a server. These client-server models should always be identified with network applications because doing so will help you understand the flow of information. If a client retrieves information from a server, then the server must have ports exposed to the client so that the information can be accessed. If a firewall is involved, then you may have to open ports on the firewall for client connectivity. If there is a client-based firewall, then ports may need to be opened on the client side to egress the network. Peer-to-Peer You have probably heard of peer-to-peer networking, but probably in the dark context of piracy. True enough, the first time many of us heard of peer-to-peer networking was in connection with illegal activities. However, peer-to-peer networking existed way before these activities made headlines. A peer is nothing more than a network node that can act as both a client and a server at the same time. This model breaks the strict client-server model because it allows a network host to access files and printers as well as serve them simultaneously. The aspect of peer-to-peer information sharing is also applicable to network operating system functions.
Many operating system functions have been adopted from the idea of peer-to-peer information sharing because it allows for decentralized functions. An example of this is a protocol called Link-Local Multicast Name Resolution (LLMNR), which is used as a peer-to-peer name resolution protocol. There is no one source of information since it is a distributed name resolution protocol where everyone is a peer to the information. LAN A local area network (LAN) defines the company's internal network. As its name implies, it is the “local area” of your network that is locally managed. As it pertains to infrastructure implementation, there should be little or no consideration for the placement of resources within the LAN. LAN speeds “should” always be the fastest within your network design and can be internally upgraded as needed. Figure 1.22 represents a typical LAN; where there is a resource local to the clients, it is considered an intranet. FIGURE 1.22 Typical LAN WLAN A wireless local area network (WLAN) is a company's internal wireless network. As with a LAN, it is the “local area” that is locally managed. The WLAN is a wireless extension of our wired local area network. As it pertains to infrastructure, we should always design the wireless network for the wireless client density it could serve. Although wireless networks are upgradable, because of the physical location of wireless access points (WAPs), such upgrades are usually costly. When designing WLANs, we should always start with a site survey to estimate the best placement of WAPs. Figure 1.23 represents a typical WLAN. The WAP extends the wired network to wireless clients. WAN A wide area network (WAN) is a network that interconnects your network locations via a provider, and as the acronym implies, it does so over a “wide area.” These locations could be within the region or different regions of the world. An example of this is two locations that are connected with a point-to-point leased line within a state. As it pertains to your infrastructure implementation, a consideration is the placement of your resources within the various networks that are interconnected. This is mainly due to the fact that WAN connections usually operate at lower speeds than your internal networks. So resources should be placed closest to the users. They also could use different protocols than your internal networks do, so certain broadcast-based technologies might not work. Keep in mind that the term WAN is a generic description, and it does not pertain to any one solution. You could have leased lines, broadband, private fiber, public fiber, or any solution that connects your offices together over a wide area. Figure 1.24 represents a typical leased-line WAN; you can see that the branch office is connected to the central office via a leased line. MAN A metropolitan area network (MAN) is a type of WAN. It is connected over a defined geographical area and has a higher connection speed between network locations via a provider. The area could be a city, a few-block radius, or a region of a state. The infrastructure implementation is similar to a WAN; the difference is the speed of the connection, because a MAN is built out by the provider as the backbone for your network locations. As it pertains to your infrastructure implementation, the placement of resources is less of a concern because of the higher speeds between the locations. An example of this is a company that has a branch office in the next town.
You may have no administrators at that location, so centralizing the server at the primary location is the design goal. This requires a higher-speed, reliable backbone between the locations. CAN A campus area network (CAN) defines multiple buildings (LANs) that are connected together, all of which are locally managed by your company. As long as you locally manage the connections between the buildings, it is considered to be a campus area network. The CAN connects multiple LANs with a private communications infrastructure. You should always take the speed between LANs into consideration for the placement of resources. As an example, file servers should always be placed closest to the majority of users. SAN A storage area network (SAN) is the network reserved for storage access. SANs often use dedicated switching equipment to provide low latency and lossless connectivity. SANs often use redundant connections in the form of a partial mesh for fault tolerance. Because this switching equipment is dedicated, it is usually found only in the data center and is used for connecting servers to the storage. A common SAN technology found in data centers is Fibre Channel (FC). However, SANs can be made up of any technology as long as the infrastructure is dedicated for storage access. I cover SANs and SAN technologies later in this chapter in the section "Explain basic corporate and data center network architecture" (objective 1.7). In Figure 1.25, we see a typical SAN that connects four servers to two Fibre Channel switches and two storage processors. FIGURE 1.25 Typical SAN PAN A personal area network (PAN) defines an ultra-small network for personal use. If you have a smartphone that is paired to your vehicle, you probably have a PAN. Many people walk around using a PAN every day: smart watches, smartphones, and personal fitness devices transmit data back and forth. A protocol often used with PANs is Bluetooth. However, PANs can use any protocol and any media. They can be wired or wireless, as long as they enable communications for devices near the person and are used for personal access. MPLS Multiprotocol Label Switching (MPLS) is an emerging WAN technology that uses packet switching. It operates by adding MPLS labels to each packet generated by the customer and switching them in the provider's network. This MPLS label allows the MPLS provider to packet-switch the data based on the label and not the layer 3 network addressing. This is why MPLS is considered to work at layer 2.5; it is not a true layer 2 protocol because it is augmented with an MPLS label. It is also not a true layer 3 protocol since the destination IP address is not used for routing decisions. This makes it an extremely efficient protocol for moving data. It is considered a packet-switched technology and can be used across many different types of connectivity technologies, such as SONET, Ethernet, and ATM, just to name a few. The key takeaway is that an underlying leased line is required for MPLS to operate. MPLS is a great connectivity method for connecting branch offices to a centralized corporate office. Cost increases with the addition of each branch office, so at some point these leased lines become uneconomical. SDWAN Software-defined wide area network (SDWAN) is another emerging WAN technology. The centralized approach of bringing all network communications from branch offices back to the centralized corporate office was fine when all the resources were in a centralized location.
For example, a typical branch office user would store their document on a server back at the corporate office where all the servers were located. This model just made sense for the past 25+ years. However, with the adoption of cloud-based services such as Microsoft 365, Salesforce, and Amazon Web Services, our office workers only need access to the Internet for connectivity, and their documents are stored in the cloud. This presents new challenges, such as how to police certain applications and assure bandwidth for others. This is where SDWAN solves a lot of problems.

All network communications use the layers of the OSI model, but the purpose of the communication defines the plane of communication. The three basic planes are the data plane, where data is moved; the control plane, where data flow is controlled; and the management plane, where the administrator manages the control plane. SDWAN decouples the control plane from branch routers and centralizes it at the SDWAN controller. Now, in lieu of Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF) routing protocols deciding how to route a packet, the SDWAN controller decides based upon congestion or application response. This is something that traditional protocols just can't do, because those control protocols are based upon path and upon getting packets routed as fast as possible, not upon which server in the cloud responds quicker. This control allows an administrator to centrally manage policies for branch office routers to prioritize and route traffic over an existing Internet connection or leased line. The administrator can prioritize Voice over IP (VoIP) traffic over basic web surfing and differentiate web surfing from web-based line-of-business applications. Because SDWAN is application aware, it can differentiate between application traffic and intelligently control the flow of information over an Internet connection. SDWAN can also be combined with traditional technologies such as virtual private networks (VPNs) to maintain access to centralized resources back at a corporate office.

Generic Routing Encapsulation (GRE) is a protocol used to create a virtual tunnel over the Internet or an internetwork. The GRE protocol only creates a tunnel between two routed points; it does not provide encryption. GRE is used in conjunction with encryption protocols to provide security over the tunnel. In practice, it is used all the time to create point-to-point virtual tunnels on the Internet. The Internet Protocol Security (IPSec) protocol is then employed to encrypt the transmission over the tunnel. The problem with GRE tunnels is that a tunnel must be built between each pair of endpoints. This isn't much of a problem if you have a few locations, but once you get more than a few routers it becomes extremely difficult to manage. In Figure 1.26 we see a corporate office with multiple branch offices distributed across the Internet. Multipoint Generic Routing Encapsulation (mGRE) solves these problems of scale and configuration complexity. The mGRE protocol allows an administrator to configure multiple GRE paths throughout the enterprise; it also allows branch offices to create logical tunnels between each office. Keep in mind that you still have to encrypt the traffic over these tunnels for privacy, but that is the easy part once the tunnels are configured.
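To make the tunneling idea concrete, the following short sketch builds a GRE-encapsulated packet with the Scapy packet-crafting library for Python. It is only an illustration of the packet-in-packet layering described above, not a router configuration: it assumes Scapy is installed, all addresses are hypothetical examples, and the packet is displayed rather than sent.

# A minimal sketch of GRE encapsulation (assumes Scapy is installed;
# all addresses are hypothetical examples).
from scapy.all import IP, GRE, ICMP

# Outer "delivery" header: the two tunnel endpoints, routable across the Internet.
outer = IP(src="198.51.100.1", dst="203.0.113.1")

# Inner "passenger" packet: private addressing that rides inside the tunnel.
inner = IP(src="10.1.0.10", dst="10.2.0.10") / ICMP()

# GRE simply wraps the passenger packet inside the delivery packet.
# It adds no encryption, which is why IPsec is layered on top in practice.
tunneled = outer / GRE() / inner

tunneled.show()  # displays the layered headers: IP / GRE / IP / ICMP

Displaying the packet shows the outer delivery header, the GRE header, and the inner passenger packet stacked in order; this is the same layering a router performs when it forwards traffic into a GRE or mGRE tunnel, before IPsec is applied for privacy.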
FIGURE 1.26 Example of multiple GRE tunnels

Service-Related Entry Point

The service-related entry point is yet another characteristic of service. It defines the point in a network at which a provider terminates its responsibility for the service and it becomes the customer's responsibility. It also defines how the service is handed off to the customer, sometimes referred to as the handoff. Many different WAN technologies can hand off the Internet connection to the customer in several different ways. The most common is Ethernet, but if distance is a factor, then fiber optic may be specified in the buildout of services. Wireless can also be an option, when wiring is too costly or impossible due to terrain. All of these handoffs are outlined in the buildout of the services from the provider to the customer; this is generally a component of the initial setup costs. However, the buildout costs can sometimes be absorbed into the monthly recurring costs of service over the length of the contract for services. No matter what method is chosen, the other side of the connection containing the customer premises equipment (CPE) is your responsibility. This is the sole function of the termination point—it terminates the provider's responsibility for equipment and signaling.

Demarcation Point

The demarcation point, often referred to as the demarc, is terminology used with leased lines and telephone equipment. Copper phone lines are considered legacy connections today because of cheaper alternatives from cable providers and fiber-to-the-premises providers. However, back when the phone company ran a copper phone line to your dwelling, you were responsible for all internal wiring. The telephone company would install a box on the outside of your house that would segment your internal wiring from the telephone company's local loop wiring. This box was called the network interface device (NID), and it was the demarcation point for the phone company. The phone technician would pull a jumper on the NID and test the phone connectivity. If it worked fine, then the problem was inside your dwelling and was your responsibility. Of course, they would be happy to fix it for a cost! Today, with many other connectivity options such as broadband cable and fiber to the premises, the demarc has become the equipment that hands off service to the customer. The technician will disconnect the rest of your network and test basic Internet connectivity. Most problems can even be diagnosed from the home office of the provider, since all of the equipment has built-in diagnostics to reduce the number of technicians dispatched. Leased lines like T1s and Integrated Services Digital Network (ISDN) lines have a mechanical jack called an RJ-48X (registered jack) at the patch panel. When the RJ-48 is removed, a shorting block bridges the connection to create a loopback. These mechanical jacks have largely been replaced with a device called a smart jack, which I will cover after the following section on CSUs/DSUs.

CSU/DSU

Channel service units/data service units (CSUs/DSUs) are devices that convert channelized serial data such as T1 and ISDN to a serial protocol compatible with routers. The CSU/DSU sits between the router and the leased line circuit. In the past, it was a separate piece of equipment, but many newer routers have CSUs/DSUs built in. The CSU/DSU will handle the receipt of clocking data from the data communication equipment (DCE) on the provider's network.
The clocking data helps the CSU/DSU convert the channelized data into digital data so that data terminal equipment (DTE) such as a router can understand the data. The CSU/DSU also helps convert data back into serialized data when the router (DTE) sends information on the provider's network (DCE). The CSU/DSU uses the RJ-48C universal service order code (USOC) jack to connect to the provider's demarcation point. The CSU/DSU is considered part of the customer premises equipment (CPE), so it is the customer's responsibility. The router side of the CSU/DSU generally has connections for RS-232 or V.35, and in most cases the CSU/DSU is built into the router.

Smart Jack

Smart jacks are normally used with leased line circuits such as T1 and ISDN. The smart jack is a diagnostic point for the provider and is generally the demarcation point. It has largely replaced the RJ-48C electromechanical jacks. Smart jacks allow the provider to convert protocols and framing types from the provider's network. The router still requires a CSU/DSU, but the smart jack can change the framing type the CSU/DSU expects. The smart jack also offers advanced diagnostic capabilities to the provider. Smart jacks allow the provider to put the circuit into a loopback mode, which enables the provider to diagnose signal quality and error rates. The smart jack also offers alarm indication signaling so the provider can determine whether the problem is on their premises equipment or the customer's. This alarm indication signaling enables the provider to dispatch technicians when the problem is discovered.

Virtualization

Before virtualization became a mainstream standard, applications had a one-to-one relationship with servers. When a new application was required, we purchased server hardware and installed an operating system along with the application. Many of these applications never used the full resources of the servers they were installed on. Virtualization solves both the problem of acquiring server hardware and the problem of applications not fully utilizing that hardware. Virtualization allows for the partitioning of server hardware with the use of a hypervisor by enabling each virtual machine (VM) to use a slice of the central processing unit (CPU) time and share random access memory (RAM). We now have a many-to-one relationship of applications to servers. We can fit many applications (operating systems) onto one physical server, called the physical host hardware. Each operating system believes that it is the only process running on the host hardware, thanks to the hypervisor.

Virtual Networking Components

So far we have covered components that support physical infrastructure. A virtualized infrastructure uses networking components similar to the physical components you have learned about already. You should have a good working knowledge of virtualized networking components, since virtualization is here to stay and is growing rapidly.

Virtual Switch

A virtual switch (vSwitch) is similar to a physical switch, but it is a built-in component of your hypervisor. It differs in a few respects; the first is the number of ports. On a physical switch, you have a defined number of ports. If you need more ports, you must upgrade the switch or replace it entirely. A virtual switch is scalable compared to its physical counterpart; you can simply add more ports. The virtual switch also performs the same functions as a physical switch, with the exception of how the MAC address table is handled.
The virtual switch only cares about the MAC addresses of the VMs logically attached to it. It doesn't care about any other MAC addresses, since those can be sorted out after forwarding the frame to a physical switch. When a physical switch doesn't know the port a MAC address is associated with, it floods the frame to all the active ports. If the MAC address is unknown, the virtual switch will forward the frame to a physical switch via the uplink port and allow the physical switch to forward it. This is how we can achieve low-latency switching on a hypervisor virtual switch.

Virtual NIC

The virtual network interface card (vNIC) is just like any other virtualized hardware in the VM. The vNIC is a piece of software that pretends to be physical hardware. It communicates directly between the VM and the virtual switch. The virtual NIC is usually generic hardware that is installed in the VM. Examples are the DEC 21140 NIC and the Intel E1000 NIC. Some hypervisors also have more advanced cards that support unique features, such as VMware's VMXNET3 NIC. The VMXNET3 NIC can support IPv6 TCP segment offloading (TSO), direct paths into the hypervisor's I/O bus for performance, and 10 Gbps data rates. These virtual NICs require the VMware drivers since they are not generic hardware presented to the VMs. Hyper-V has a virtual NIC called a synthetic NIC; synthetic NICs allow for similar functionality with features such as IPv6 TSO, single-root I/O virtualization (SR-IOV), direct ties into the Hyper-V VMBus, and 10 Gbps data rates. It too requires the VM to install the guest services software.

Network Function Virtualization (NFV)

Network functions such as firewalls and routing can all be virtualized inside the hypervisor. They operate just like their physical versions, but we don't have to worry about power supplies failing, CPUs going bad, or anything else that can cause a physical network device to fail. We do have to worry about the host that runs the virtual network functions; however, redundancy is built into many hypervisors. Personally, I prefer to virtualize as many functions as I possibly can.

Virtual Firewall

A virtual firewall is similar to a physical firewall. It can be a firewall appliance installed as a virtual machine or a kernel mode process in the hypervisor. When installed as a firewall appliance, it performs the same functions as a traditional firewall. In fact, many of the traditional firewalls today are offered as virtual appliances. When virtualizing a firewall, you gain the fault tolerance of the entire virtualization cluster for the firewall—compared to a physical firewall, where your only option for fault tolerance may be to purchase another unit and cluster the two together. As an added benefit, when a firewall is installed as a virtual machine, it can be backed up and treated like any other VM. A virtual firewall can also be used as a hypervisor virtual kernel module. These modules have become popular with the expansion of software-defined networking (SDN). Firewall rules can be configured for layer 2 MAC addresses or protocols along with traditional layer 3 and layer 4 rules. Virtual firewall kernel modules use policies that apply to all hosts in the cluster. The important difference between virtual firewall appliances and virtual firewall kernel modules is that the traffic never leaves the host when a kernel module is used.
When using a virtual firewall appliance, by contrast, the traffic might need to leave the current host to reach the host that is actively running the virtual firewall appliance.

Virtual Router

The virtual router is identical to a physical router in just about every respect. It is commonly loaded as a VM appliance to facilitate layer 3 routing. Many companies that sell network hardware have come up with unique features that run on their virtual routing appliances; these features include VPN services, BGP routing, and bandwidth management, among others. The Cisco Cloud Services Router (CSR) 1000v is a virtual router that is sold and supported on cloud providers such as Amazon Web Services and Microsoft Azure. Juniper also offers a virtual router called the vMX router, which Juniper advertises as a carrier-grade virtual router.

Hypervisor

The virtual networking components would not be virtualized if it weren't for the hypervisor. The hypervisor sits between the hardware or operating system and the VM to allow for resource sharing, time sharing of VMs on the physical hardware, and virtualization of the guest operating systems (VMs). The hardware that the hypervisor is installed on is called the host, and the virtual machines are called guests. There are three different types of hypervisors, as shown in Figure 1.27.

FIGURE 1.27 Hypervisor types

A Type 1 hypervisor is software that runs directly on the hardware; its only purpose is to share the hardware among VMs running as guest operating systems. This concept is not as new as you might think. IBM offered mainframes that performed this partitioning of hardware as early as 1967! Examples of Type 1 hypervisors are Xen/Citrix XenServer, VMware ESXi, and Hyper-V. Although Hyper-V fits into the third category of hypervisors, it is still considered a Type 1 hypervisor.

A Type 2 hypervisor is software that runs on the host operating system. It runs as a process in the host operating system. Despite what you may think, Type 2 hypervisors do talk directly to the CPU via Intel VT or AMD-V extensions, depending on which vendor you are using. Memory utilization is similar to CPU utilization, but the host operating system relays the requests via Direct Memory Access (DMA) calls. All other hardware is proxied through the host operating system. Examples of Type 2 hypervisors are VMware Workstation, VirtualBox, Parallels for macOS, and the open-source QEMU.

Hybrid hypervisors are a bit different from Type 1 or Type 2 hypervisors. They function outside of the norm of cloud computing hypervisor models. They require a host operating system but function as a Type 1 hypervisor. As an example, Hyper-V requires the Microsoft operating system to be installed, but the host operating system is a guest called the parent partition. It is treated the same as the guest or child partitions, but it is required for management of the hypervisor. Examples of hybrid hypervisors are Linux Kernel–based Virtual Machine (KVM), FreeBSD bhyve (pronounced beehive), and Microsoft Hyper-V.

Service Type

The service type defines the service from the provider, also known as the provider link. For example, broadband cable is a cable company service type, and DSL is a phone company service type. There are many different service types well beyond those covered in the following sections. However, these are the most common service types that you will see for WAN connectivity service offerings.

Leased-Line

Leased lines were the most popular service type 25 years ago.
You might wonder why they would be covered on the Network+ exam if they are so old. It's because they still serve a purpose, and newer technologies such as MPLS can be overlaid on top of these service types.

ISDN

Integrated Services Digital Network (ISDN) is a useful service for voice calls, but it's not that useful for data. You will probably never use it for data services, and if you run into it, you will probably be migrating away from it. It is a popular connectivity technology for phone systems, like private branch exchanges (PBXs). You may have to interface with a PBX for integrated voice services someday. ISDN is still used today by phone service providers. It is deployed in two different modes: Basic Rate Interface (BRI) and Primary Rate Interface (PRI). PRI, which I will cover later, is the most common implementation.

T1/T3

A T1, or tier 1, of service is sometimes referred to as a DS-1, or Digital Service tier 1. You specify the tier of service that you require when ordering service as a T1. The T1 provides 1.544 Mbps of bandwidth. A T1 is a group of 24 channels of serial data. Think of the T1 as a conveyor belt consistently moving from one location to another. On the conveyor belt there are 24 buckets, and each bucket is a channel of data (DS0). We use a special device called a channel service unit/data service unit (CSU/DSU) to convert the channels back into a stream of data. If each channel is 64 Kbps and we have 24 channels, that gives us 1.536 Mbps of payload; an additional 8 Kbps of framing overhead brings the total line rate to 1.544 Mbps. Channels can be used for voice or data. We can even divide a T1 so that some of the channels are for the PSTN and some are for data. We can even purchase only a few channels of data; this is called a fractional T1.

A T3, or tier 3, of service is sometimes referred to as a DS-3. It is the next step up from a T1 when you need more bandwidth. You may be wondering what happened to the T2. It existed at one point, but T1 and T3 became the popular ordering standards. A T3 is 28 T1 connections, or 672 DS0 channels, combined to deliver 44.736 Mbps of bandwidth.

E1/E3

An E1 is common only in Europe and in interconnections to Europe. It too works by channelizing data in 64 Kbps buckets, the same as a T1. However, it has 32 channels. This gives us 32 channels of 64 Kbps, for a total of 2.048 Mbps. An E3 is the European counterpart of the T3 and consists of 16 E1 connections, or 512 DS0s, combined to deliver 34.368 Mbps of bandwidth.

PRI

Primary Rate Interface (PRI) is an ISDN circuit, and it can be used for voice and data. When you purchase an ISDN circuit, you basically purchase a T1 leased line with ISDN signaling. A T1 has 24 channels of 64 Kbps. ISDN functions by using one of the channels as a control channel called the D (delta) channel. The other 23 data channels are called the B (bearer) channels; this is sometimes noted in shorthand as 23B + D. The D channel will control call setup, and the B channels will carry data or voice calls. Twenty-three channels at 64 Kbps is 1,472 Kbps (1.472 Mbps) of bandwidth. This is why ISDN excels when it is used for voice communications: the D channel communicates call information for the other 23 channels to both ends (provider and PBX). In doing this call setup, it avoids something called call collisions. Call collisions happen when a call is coming in and going out on the same channel at the same time. It is a popular technology for voice but not for data.

OC3-OC1920

The OC stands for optical carrier, since these services are delivered over fiber-optic cables.
They still have channelized data and require a CSU/DSU—the data just happens to be delivered over a fiber cable via a SONET ring. An OC1 has a speed of 51.84 Mbps. Unfortunately, there is some overhead in an OC1, which takes usable bandwidth to approximately 50 Mbps. We use the 51.84 Mbps when calculating OC speeds. An OC3 is three OC1s combined to supply approximately 150 Mbps of bandwidth. An OC12 is 12 OC1s combined to supply approximately 600 Mbps of bandwidth. You can see how the OCs are calculated. An OC-1920 is 1920 OC1s combined to supply approximately 100 Gbps, which is currently the top speed of optical carriers.

DSL

Digital Subscriber Line (DSL) uses copper phone lines to transmit data and voice. These lines are already running to your house or business, which is why plain old telephone service (POTS) providers became ISPs. The provider will have a piece of equipment called a DSL Access Multiplexer (DSLAM) at the local central office (CO) where your phone line is wired for dial tone. The DSLAM sits between the POTS equipment in the CO and your house or business (the premises). The DSLAM communicates with the modem at your premises by using the frequencies above 3400 hertz. The POTS system filters anything above 3400 hertz, which is why music sounds terrible over a phone call. Filters are placed on the existing phones at your premises, so your calls do not interrupt data communications and your voice calls are not disturbed by the modem's screeching of data. Figure 1.28 shows a typical DSL connection and its various components.

FIGURE 1.28 A DSL network

ADSL

Asymmetrical Digital Subscriber Line (ADSL) is the most common DSL offering for homes and small businesses. The download speed is asymmetrical to the upload speed. ADSL has a typical download rate of 10 Mbps and an upload speed of 0.5 Mbps (512 Kbps). The upload speed is usually 1/20th of the download speed. Although this connectivity method has a decent download speed, you will be limited by the upload speed. ADSL is good for users who require Internet access for web surfing, but it is not the ideal technology for hosting services and servers.

SDSL

Symmetrical Digital Subscriber Line (SDSL) is a common DSL offering for small businesses. The download and upload speeds are the same, typically 1.5 Mbps each. SDSL is comparable to T1 leased lines, which is relatively slow for most businesses today. SDSL is cheaper in comparison to leased lines, so for many businesses that do not require high speed, it is a good option.

VDSL

Very-high-bitrate Digital Subscriber Line (VDSL) is today's replacement for ADSL and SDSL, and it does not lack speed. VDSL can supply asymmetrical speeds of 300 Mbps download and 100 Mbps upload, or symmetrical speeds of 100 Mbps download and 100 Mbps upload. Just like ADSL and SDSL, it can handle these data speeds across the same phone lines you use to make phone calls.

Metropolitan Ethernet

Metropolitan Ethernet, sometimes referred to as Metro-E or Metro-optical, is an emerging technology that allows service providers to connect campus networks together with layer 2 connectivity. This technology allows a network spread over a large area to act like a LAN. The provider achieves this by building Ethernet virtual connections (EVCs) between the campus networks. The customer can purchase point-to-point EVCs between two locations, or multipoint-to-multipoint EVCs between several locations, to create a fully meshed network. Metro-E can also provide this connectivity over many different connectivity technologies, such as leased lines, ATM, SONET, and so on.
Metro-E is an extremely flexible connectivity technology that is cost-effective and easy to configure, since it acts like a giant switch between network campuses.

Broadband Cable

Cable companies introduced Internet access on their cable infrastructure over 20 years ago. It was this existing cable infrastructure at the time that allowed cable companies to become ISPs. Today broadband cable is available almost anywhere in metro areas and surrounding suburban areas. Broadband cable operates on a specification called Data Over Cable Service Interface Specification (DOCSIS), through the use of a DOCSIS modem, sometimes referred to as a cable modem. It can typically deliver 300 Mbps download and 100 Mbps upload speeds. A cable modem communicates over coax lines that are run to your house or business and lead back to a fiber-optic node (see Figure 1.29). The fiber-optic node is a device in your area that converts coax communications to a fiber-optic line that ultimately leads back to the head end. The head end houses the cable company's routers and the distribution point for its Internet connection. One disadvantage is the shared coax line that leads back to the fiber node. Congestion and interference on this shared coax line can degrade services and speed for everyone in your service area.

FIGURE 1.29 The broadband cable network

Dial-up

Dial-up uses modems to communicate over the public switched telephone network (PSTN) using a plain old telephone service (POTS) line. It has a maximum theoretical speed of 56 Kbps with the V.92 specification, although North American phone systems limited speeds to 53 Kbps. Dial-up is too slow to browse the web, but it is extremely useful for out-of-band management of routers, switches, and other text-based network devices. All you need is a phone line and you can dial in to the device. You may ask why you need it if you have an IP address configured on the device. It is often used if the device loses connectivity to the Internet or network and is too far away to drive to; you can just dial in to troubleshoot it. Dial-up is a backup control for network outages since it uses the PSTN for connectivity.

Satellite

Satellite communications allow unidirectional and bidirectional communications anywhere there is a line of sight to the earth's equator. There is a group of satellites about 22,000 miles above the equator in a geosynchronous orbit used for communications. If you have a satellite dish, you are pointed to one of these satellites. In a unidirectional setup, you can receive video, voice, music, and data, but you cannot send information back. Your satellite TV dish operates in this mode of communication. It is also popular for command and control situations where first responders need only to view camera feeds and data such as weather. In a bidirectional setup, you can also send data back through the use of a very small aperture terminal (VSAT), which is a dish that can transmit and receive data. Although this technology sounds amazing, there are some issues, such as the transmission distance and the speed of light (about 186,000 miles per second), which is how fast your transmission travels. There are four transmissions that need to traverse the distance between you and the satellite and between the satellite and the provider (see Figure 1.30). You first send your request to the satellite; then the satellite relays it to the provider, the provider replies back to the satellite, and the satellite replies back to you.
So although it is a great technology for remote locations, the delay (four trips of roughly 22,000 miles each at 186,000 miles per second works out to about half a second before any processing) can make real-time protocols such as VoIP very difficult.

FIGURE 1.30 A typical satellite network

In recent news, SpaceX started launching satellites for a service called Starlink. Although it is currently in beta in the United States, it is scheduled to be released globally in the future. The service boasts a low-latency connection to the Internet for the consumer and very fast speeds. It achieves this by maintaining a very low earth orbit, which requires a large number of satellites because of the curvature of the earth and the need for line of sight.

Service Delivery

Regardless of which type of Internet provider you select, the provider will hand off service to you with one of three methods: copper, fiber optic, or wireless. You should be familiar with these methods, their uses, and their limitations. In the discussion of objective 1.3, I will cover connectivity methods and standards much more thoroughly.

Copper

Copper cable is a popular handoff from the provider when the network equipment is within 100 meters of the provider's termination point. The various services that copper is used with include leased lines, broadband cable, DSL, and dial-up. Metropolitan Ethernet services can be ordered as either a copper or fiber handoff from the provider. Copper has limited distance and speed, so fiber handoffs from the provider are more common.

Fiber

Fiber-optic cable (fiber) is used to provide extremely fast connectivity over long distances. Typical speeds of 10, 40, and 100 Gbps are transmitted on fiber, but higher speeds can be achieved. Distances will vary with the speed and type of cable being used; the typical range can be 150 meters to 120 kilometers (75 miles). Fiber comes in two variations from the service provider: lit fiber and dark fiber. Lit fiber, also called managed fiber, is similar to Verizon's FiOS service. The provider is responsible for installing the fiber cable and for the equipment and maintenance on each end. Dark fiber is just a piece of fiber from one location to another, and the customer is responsible for lighting it and maintaining it. Dark fiber is used inside the network campus, and it can also be used for WAN connectivity. Dark fiber is the cheaper option after the upfront cost for equipment. Fiber is used to deliver several of the services covered in this chapter.

Wireless

Wireless transmission media are normally used when cabling cannot be accomplished or is too expensive. Examples of this are Internet connectivity for ships and planes, as well as remote locations in mountainous terrain. Some services are exclusively delivered via wireless. Worldwide Interoperability for Microwave Access (WiMAX) is a connectivity technology similar to Wi-Fi with respect to delivering Internet over wireless. It is defined by the IEEE as 802.16 and operates on 2 GHz to 11 GHz and 10 GHz to 66 GHz. It can be used line of sight, or non–line of sight when there are obstructions such as trees. The service provider will mount a WiMAX radio on a tower, similar in concept to cellular communications. The WiMAX tower can cover areas as large as 3,000 square miles (a 30-mile radius). This allows rural areas, where running dedicated lines is impossible, to have Internet connectivity. Subscribers need either a WiMAX card in their computer or a WiMAX router to connect to the tower. When WiMAX originally launched, it was capable of delivering speeds of 40 Mbps; it can now deliver speeds up to 1 Gbps.
It is commonly used by many cellular providers to backhaul cellular traffic from remote cell towers.

Exam Essentials

Know the various wired topologies. Logical topologies provide a high-level overview of the network and how it operates. The physical topology is a more detailed view of the network and how it can operate. Star topologies are used for Ethernet networks. Ring topologies are used for WAN connectivity. Mesh topologies are commonly found in the core of the network. Bus topologies are no longer used for Ethernet, but they can be found in many other technologies.

Know the various types of networks. A local area network (LAN) is the locally managed network. The wireless local area network (WLAN) extends the LAN with wireless capabilities. The wide area network (WAN) allows a site to get access to another site or Internet access. The metropolitan area network (MAN) is a type of WAN that is constrained to a metropolitan area. The campus area network (CAN) covers a relatively small area that is locally managed. The storage area network (SAN) is used exclusively for connecting to storage. The personal area network (PAN) is a network that is for personal use.

Know the function and understand the fundamentals of virtual networking components. A virtual switch functions similarly to a physical switch, except for how MAC addresses are handled. You can install virtual firewalls as virtual appliances, and some virtualization software offers a kernel mode firewall in the hypervisor. The virtual NIC is a software-emulated generic network card in the guest operating system. Virtual routers are similar to hardware routers. Hypervisors allow the hardware resources to be shared among virtual machines.

Know the various service types of WAN technologies. ISDN PRI operates on a T1 leased line and reserves one of the 24 channels for call setup. T1 lines are point-to-point serial connections with a speed of 1.544 Mbps. E1 lines are similar in function to a T1 and are used mainly in Europe. T3 lines consist of 28 T1 connections. E3 lines consist of 16 E1 connections. Optical carriers (OCs) are based on the OC1 at around 50 Mbps. Metropolitan Ethernet is a WAN technology. Broadband cable uses a coaxial network to communicate back to a fiber node that is wired to the head end at the cable company. Dial-up is a legacy technology. SDWAN is a routing technology that is application aware. MPLS is a packet forwarding technology that is used for WAN connectivity. mGRE is a protocol that allows multiple GRE tunnels to be set up for scalability.

Know the various termination points of provider services. The demarcation point is the end of the provider's responsibility. The CSU/DSU converts channelized serial data from the provider's network to digital serial data for the customer premises equipment. The customer premises equipment is usually the customer's router. The smart jack enables the provider to remotely diagnose a leased line connection.

1.3 Summarize the types of cables and connectors and explain which is the appropriate type for a solution.

In the discussion of this objective, I will cover the common cabling, connectors, termination points, and wiring specifications involved in connecting a network together. Over your career as a network professional, you will need to deploy the proper cabling for a given network connectivity scenario.
At the end of this discussion, you should be able to describe the practical application of the cabling, connectors, termination points, and specifications for a network design.

Media Types

When wiring a network, you will have two main media types: copper cabling and fiber-optic cabling. The decision between the two is based on a number of factors that I will detail in the following sections. After the selection of the appropriate cable type for the network design, there are several different specifications of these cables that we will cover later.

Copper

As a network professional, you will be responsible for identifying cabling, diagnosing cabling problems, and ordering the proper cabling for the installation required. Coaxial cable is not used for Ethernet networking anymore, but you should be able to identify it and understand its practical application.

UTP

Unshielded twisted-pair (UTP) is the most common cabling for Ethernet networks today, and it is the least expensive option for cabling a network. It is unshielded from electromagnetic interference (EMI), so the placement of cables in a network should avoid EMI sources. UTP should always be cabled away from electrical lines and non-network cabling. Because of the lack of shielding, electrical lines can induce erroneous signals if network cables are run in parallel with them. UTP cable has a PVC or Teflon cable jacket, as shown in Figure 1.31; inside are four pairs of wires (eight conductors). Each of the four pairs has a specific number of twists per inch. I will cover the category specification that defines speed in relation to the twists and how the pairs are separated in a later section, "Copper Cabling Standards."

STP

Shielded twisted-pair (STP) is commonly used in industrial settings, where electromagnetic interference (EMI) can induce erroneous data into the cable. STP cables should be used when running network cables around or near large motors, welding equipment, HVAC equipment, high-voltage lighting, and so on. There are several different types of STP cable depending on the severity of EMI. The most common STP cable consists of a PVC or Teflon jacket as well as a woven metal shielding that protects the four pairs of twisted wires, as shown in Figure 1.32. Depending on the application, the individual pairs may have foil shielding as well. The cabling is significantly more expensive than UTP and more difficult to install because of the Ethernet jack shielding and RJ-45 shielding required.

FIGURE 1.31 A common UTP cable

FIGURE 1.32 A common STP cable

When installing cable in an industrial setting such as a factory, where cabling is exposed to vibrations, chemicals, temperature, and EMI, the MICE (Mechanical, Ingress, Climatic/Chemical, and Electromagnetic) classification should be followed. The standard is defined in an ISO/IEC (International Organization for Standardization/International Electrotechnical Commission) publication. When in doubt in an industrial setting, it is best to engage an engineer to define the type of cabling to use, because safety can be compromised.

Coaxial

Coaxial cable is no longer used in networks today for Ethernet communications on the local area network (LAN). Coaxial cable is still used for security cameras and broadband cable networks. A coaxial cable contains a solid core wire th
