
Networking.docx




Looking around most homes or offices today, it’s hard to imagine a world without networks. Nearly every place of business has some sort of network. Wireless home networks have exploded in popularity in the last decade, and it seems that everywhere you go, you can see a dozen wireless networks from your smartphone, tablet, or laptop. It didn’t use to be that way. Even when we’re not thinking about networks, we’re still likely connected to one via the ubiquitous Internet-enabled smartphones in our pockets and purses. We take for granted a lot of what we have gained in technology over the past few years, much less the past several decades. Thirty years ago, if you wanted to send a memo to everyone in your company, you had to use a photocopier and interoffice mail. Delivery to a remote office could take days. Today, one mistaken click of the Reply All button can result in instantaneous embarrassment. Email is an example of one form of communication that became available with the introduction and growth of networks.

This chapter focuses on the basic concepts of how a network works, including the way it sends information, the hardware used, and the common types of networks you might encounter. It used to be that in order to be a PC technician, you needed to focus on only one individual (but large) computer at a time. In today’s environment, though, you will in all likelihood need to understand combinations of hardware, software, and network infrastructure in order to be successful.

If the material in this chapter interests you, you might consider studying for, and eventually taking, CompTIA’s Network+ exam. It is a vendor-neutral networking certification similar to A+ but focused on network-related topics. You can study for it using Sybex’s CompTIA Network+ Study Guide by Todd Lammle (Sybex, 2018), available at your favorite online bookseller.
Understanding Networking Principles

Stand-alone personal computers, first introduced in the late 1970s, gave users the ability to create documents, spreadsheets, and other types of data and save them for future use. For the small-business user or home-computer enthusiast, this was great. For larger companies, however, it was not enough. Larger companies had greater needs to share information between offices and sometimes over great distances. Stand-alone computers were insufficient for the following reasons:

Their small hard-drive capacities were insufficient.
To print, each computer required a printer attached locally.
Sharing documents was cumbersome. People grew tired of having to save to a floppy and then take that disk to the recipient. (This procedure was called sneakernet.)
There was no email. Instead, there was interoffice mail, which was slow and sometimes unreliable.

To address these problems, networks were born. A network links two or more computers together to communicate and share resources. Their success was a revelation to the computer industry as well as to businesses. Now departments could be linked internally to offer better performance and increase efficiency.

You have probably heard the term networking in a social or business context, where people come together and exchange names for future contact and access to more resources. The same is true with a computer network. A computer network enables computers to link to each other’s resources. For example, in a network, not every computer needs a printer connected locally in order to print. Instead, you can connect a printer to one computer, or you can connect it directly to the network and allow all the other computers to access it. Because they allow users to share resources, networks can increase productivity as well as decrease cash outlay for new hardware and software.
In the following sections, we will discuss the fundamentals of networking as well as the types of networks that you are likely to encounter.

Understanding Networking Fundamentals

In many cases, networking today has become a relatively simple plug-and-play process. Wireless network cards can automatically detect and join networks, and then you’re seconds away from surfing the web or sending email. Of course, not all networks are that simple. Getting your network running may require a lot of configuration, and one messed-up setting can cause the whole thing to fail.

Just as there is a lot of information you should know about how to configure your network, there is a lot of background information you should understand about how networks work. The following sections cover the fundamentals, and armed with this information, you can then move on to how to make it all work right.

Network Types

The local area network (LAN) was created to connect computers in a single office or building. Expanding upon that, a wide area network (WAN) includes networks outside the local environment and can also distribute resources across great distances. Generally, it’s safe to think of a WAN as multiple, dispersed LANs connected together. Today, LANs exist in many homes (wireless networks) and nearly all businesses. WANs are fairly common too, as businesses embrace mobility and more of them span greater distances. Historically, only larger corporations used WANs, but many smaller companies with remote locations now use them as well.

Having two types of network categories just didn’t feel like enough, so the industry introduced three more terms: the personal area network, the metropolitan area network, and the wireless mesh network. The personal area network (PAN) is a very small-scale network designed around one person within a very limited boundary area. The term generally refers to networks that use Bluetooth technology.
On a larger scale is the metropolitan area network (MAN), which is bigger than a LAN but not quite as big as a WAN. The most recent designation created is the wireless mesh network (WMN). As the name indicates, it’s a wireless network, and it uses what’s known as a mesh topology. We’ll cover both of those concepts in more detail later in this chapter.

It is important to understand these concepts as a service professional because when you’re repairing computers, you are likely to come in contact with problems that are associated with the computer’s connection to a network. Understanding the basic structure of the network can often help you solve a problem.

LANs

The 1970s brought us the minicomputer, which was a smaller version of large mainframe computers. Whereas the mainframe used centralized processing (all programs ran on the same computer), the minicomputer used distributed processing to access programs across other computers. As depicted in Figure 6.1, distributed processing allows a user at one computer to use a program on another computer as a backend to process and store information. The user’s computer is the frontend, where data entry and minor processing functions are performed. This arrangement allowed programs to be distributed across computers rather than be centralized. This was also the first time network cables rather than phone lines were used to connect computers.

Figure 6.1 Distributed processing

By the 1980s, offices were beginning to buy PCs in large numbers. Portables were also introduced, allowing computing to become mobile. Neither PCs nor portables, however, were efficient in sharing information. As timeliness and security became more important, floppy disks were just not cutting it. Offices needed to find a way to implement a better means to share and access resources. This led to the introduction of the first type of PC local area network (LAN): ShareNet by Novell, which had both hardware and software components.
LANs simply link computers in order to share resources within a closed environment. The first simple LANs were constructed a lot like the LAN shown in Figure 6.2.

Figure 6.2 A simple LAN

After the introduction of ShareNet, more LANs sprouted. The earliest LANs could not cover large distances. Most of them could only stretch across a single floor of the office and could support no more than 30 computers. Furthermore, they were still very rudimentary and only a few software programs supported them. The first software programs that ran on a LAN were not capable of being used by more than one user at a time. (This constraint was known as file locking.) Nowadays, multiple users often concurrently access a program or file. Most of the time, the only limitations will be restrictions at the record level if two users are trying to modify a database record at the same time.

WANs

By the late 1980s, networks were expanding to cover large geographical areas and were supporting thousands of users. Wide area networks (WANs), first implemented with mainframes at massive government expense, started attracting PC users as networks went to this new level. Employees of businesses with offices across the country communicated as though they were only desks apart. Soon the whole world saw a change in the way of doing business, across not only a few miles but across countries. Whereas LANs are limited to single buildings, WANs can span buildings, states, countries, and even continental boundaries. Figure 6.3 shows an example of a simple WAN.

Figure 6.3 A simple WAN

The networks of today and tomorrow are no longer limited by the inability of LANs to cover distance and handle mobility. WANs play an important role in the future development of corporate networks worldwide.

PANs

The term PAN is most commonly associated with Bluetooth networks. In 1998, a consortium of companies formed the Bluetooth Special Interest Group (SIG) and formally adopted the name Bluetooth for its technology.
The name comes from a tenth-century Danish king named Harald Blåtand, known as Harold Bluetooth in English. (One can only imagine how he got that name.) King Blåtand had successfully unified warring factions in the areas of Norway, Sweden, and Denmark. The makers of Bluetooth were trying to unite disparate technology industries, namely computing, mobile communications, and the auto industry.

Although the most common use of a PAN is in association with Bluetooth, a PAN can also be created with other technologies, such as infrared.

Current membership in the Bluetooth SIG includes Microsoft, Intel, Apple, IBM, Toshiba, and several cell phone manufacturers. The technical specification IEEE 802.15.1 describes a wireless personal area network (WPAN) based on Bluetooth version 1.1. The first Bluetooth device on the market was an Ericsson headset and cell phone adapter, which arrived on the scene in 2000. While mobile phones and accessories are still the most common types of Bluetooth devices, you will find many more, including wireless keyboards, mice, and printers. Figure 6.4 shows a Bluetooth USB adapter.

Figure 6.4 Bluetooth USB adapter

We cover Bluetooth in more detail in Chapter 8, “Installing Wireless and SOHO Networks.” Also, if you want to learn more about Bluetooth, you can visit www.bluetooth.com.

One of the defining features of a Bluetooth WPAN is its temporary nature. With traditional Wi-Fi, you need a central communication point, such as a wireless router or access point, to connect more than two devices together. (This is referred to as infrastructure.) Bluetooth networks are formed on an ad hoc basis, meaning that whenever two Bluetooth devices get close enough to each other, they can communicate directly with each other—no central communication point is required. This dynamically created network is called a piconet. A Bluetooth-enabled device can communicate with up to seven other devices in one piconet.
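The seven-device limit of a piconet can be sketched in a few lines of code. This is purely illustrative; the class and method names are made up for the example and do not correspond to any real Bluetooth API.

```python
# Hypothetical sketch of a piconet's active-device limit.
class Piconet:
    MAX_ACTIVE_PEERS = 7  # a device can talk to up to seven others per piconet

    def __init__(self, master):
        self.master = master
        self.peers = []

    def join(self, device):
        """Admit a device if the piconet has room; return True on success."""
        if len(self.peers) >= self.MAX_ACTIVE_PEERS:
            return False  # piconet is full; a scatternet would be needed
        self.peers.append(device)
        return True

piconet = Piconet("phone")
results = [piconet.join(f"headset-{i}") for i in range(8)]
print(results)  # the eighth join is refused
```

The eighth device cannot join, which is exactly the situation a scatternet addresses.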
Two or more piconets can be linked together in a scatternet. In a scatternet, one or more devices would serve as a bridge between the piconets.

MANs

For those networks that are larger than a LAN but confined to a relatively small geographical area, there is the term metropolitan area network (MAN). A MAN is generally defined as a network that spans a city or a large campus. For example, if a city decides to install wireless hotspots in various places, that network could be considered a MAN.

One of the questions a lot of people ask is, “Is there really a difference between a MAN and a WAN?” There is definitely some gray area here; in many cases they are virtually identical. Perhaps the biggest difference is who has responsibility for managing the connectivity. In a MAN, a central IT organization, such as the campus or city IT staff, is responsible. In a WAN, it’s implied that you will be using publicly available communication lines, and there will be a phone company or other service provider involved.

WMNs

Wireless networks are everywhere today. If you use your smartphone, tablet, or laptop to look for wireless networks, chances are you will find several. Wireless clients on a network typically access the network through a wireless access point (WAP). The WAP may connect wirelessly to another connectivity device, such as a wireless router, but more likely uses a wired connection to a router or switch. (We’ll talk about all of these devices later in the chapter.)

The key defining factor of a mesh network topology is that it has multiple redundant connections. If one fails, another is available to take its place. Therefore, a wireless mesh network is one that uses wireless but has multiple redundant connections to help ensure that communication runs smoothly. While mobility is a key feature of wireless networking, the key infrastructure that wireless clients connect to is generally not mobile.
It makes it a lot harder to connect to a hotspot if you don’t know where it will be today! In order to implement a WMN, the access points and other wireless infrastructure must support it. A WMN is then managed through a cloud-based network controller, which allows the administrator to enable, configure, and monitor the network remotely.

In addition to LANs, WANs, and others, the A+ exam 220-1001 objective 2.7 covers Internet connection types. We cover these in Chapter 8. There, we show you the details of each type of connection and factors to consider when choosing one for yourself or a client.

Primary Network Components

Technically speaking, two or more computers connected together constitute a network. But networks are rarely that simple. When you’re looking at the devices or resources available on a network, there are three types of components of which you should be aware:

Servers
Clients or workstations
Resources

Every network requires two more items to tie these three components together: a network operating system (NOS) and some kind of shared medium (wired or wireless connectivity). These components are covered later in their own sections.

Blurring the Lines

In the 1980s and 1990s, LANs and WANs were often differentiated by their connection speeds. For example, if you had a 10 Mbps or faster connection to other computers, you were often considered to be on a LAN. WANs were often connected to each other by very expensive T1 connections, which have a maximum bandwidth of 1.544 Mbps.

As with all other technologies, networking capacity has exploded. In today’s office network, wired connections slower than 100 Mbps are considered archaic. Connections of 1 Gbps are fairly common. WAN connectivity, although still slower than LAN connectivity, can easily be several times faster than a T1. Because of the speed increases in WAN connectivity, the old practice of categorizing your network based on connection speed is outdated.
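To make the speed comparison concrete, a quick back-of-the-envelope calculation shows how long a 100 MB file takes to move over the link speeds mentioned above. This assumes decimal units (1 MB = 8,000,000 bits) and ignores protocol overhead, so real-world times would be somewhat longer.

```python
# Rough transfer-time comparison for T1 vs. modern LAN speeds.
def transfer_seconds(file_megabytes, link_mbps):
    bits = file_megabytes * 8_000_000      # 1 MB = 8,000,000 bits
    return bits / (link_mbps * 1_000_000)  # Mbps = 1,000,000 bits per second

file_mb = 100
for name, mbps in [("T1", 1.544), ("Fast Ethernet", 100), ("Gigabit Ethernet", 1000)]:
    print(f"{name:16s} {transfer_seconds(file_mb, mbps):8.1f} s")
```

Over a T1 the file takes roughly eight and a half minutes; over Gigabit Ethernet it takes under a second, which is why speed is no longer a useful way to tell LANs and WANs apart.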
Today, the most common way to classify a network is based on geographical distance. If your network is in one central location, whether that is one office, one floor of an office building, or maybe even one entire building, it’s usually considered a LAN. If your network is spread out among multiple distant locations, it’s a WAN.

Servers

Servers come in many shapes and sizes. They are a core component of the network, providing a link to the resources necessary to perform any task. The link that the server provides could be to a resource existing on the server itself or to a resource on a client computer. The server is the critical enabler, offering directions to the client computers regarding where to go to get what they need.

Servers offer networks the capability of centralizing the control of resources and security, thereby reducing administrative difficulties. They can be used to distribute processes for balancing the load on computers and can thus increase speed and performance. They can also compartmentalize files for improved reliability. That way, if one server goes down, not all of the files are lost.

Servers can perform several different critical roles on a network. For example, a server that provides files to the users on the network is called a file server. Likewise, one that hosts printing services for users is called a print server. Yet another example is a network attached storage (NAS) device, which we discussed in Chapter 4, “Custom PC Configurations.” Servers can be used for other tasks as well, such as authentication, remote access services, administration, email, and so on. Networks can include multipurpose and single-purpose servers. A multipurpose server can be, for example, both a file server and a print server at the same time. If the server is a single-purpose server, it is a file server only or a print server only.
Another distinction we use in categorizing servers is whether they are dedicated or nondedicated.

Dedicated Servers
A dedicated server is assigned to provide specific applications or services for the network and nothing else. Because a dedicated server specializes in only a few tasks, it requires fewer resources from the computer that is hosting it than a nondedicated server might require. This savings may translate to efficiency and can thus be considered as having a beneficial impact on network performance. A web server is an example of a dedicated server: it is dedicated to the task of serving up web pages and nothing else.

Nondedicated Servers
Nondedicated servers are assigned to provide one or more network services and local access. A nondedicated server is expected to be slightly more flexible in its day-to-day use than a dedicated server. Nondedicated servers can be used to direct network traffic and perform administrative actions, but they are also often used to serve as a frontend for the administrator to work with other applications or services or to perform services for more than one network. For example, a dedicated web server might serve out one or more websites, whereas a nondedicated web server serves out websites but might also function as a print server on the local network or as the administrator’s workstation.

The nondedicated server is not what some would consider a true server, because it can act as a workstation as well as a server. The workgroup server at your office is an example of a nondedicated server. It might be a combination file, print, and email server. Plus, because of its nature, a nondedicated server could also function well in a peer-to-peer environment. It could be used as a workstation in addition to being a file, print, and email server.
We will talk in more depth about server roles in Chapter 9, “Network Services, Virtualization, and Cloud Computing.”

Many networks use both dedicated and nondedicated servers to incorporate the best of both worlds, offering improved network performance with the dedicated servers and flexibility with the nondedicated servers.

Workstations

Workstations are the computers on which the network users do their work, performing activities such as word processing, database design, graphic design, email, and other office or personal tasks. Workstations are basically everyday computers, except for the fact that they are connected to a network that offers additional resources. Workstations can range from diskless computer systems to desktops or laptops. In network terms, workstations are also known as client computers. Examples include thin clients, thick clients, virtualization workstations, and graphic/CAD/CAM workstations, which you learned about in Chapter 4. As clients, they are allowed to communicate with the servers in the network to use the network’s resources.

It takes several items to make a workstation into a network client. You must install a network interface card (NIC), a special expansion card that allows the PC to talk on a network. You must connect it to a cabling system that connects to other computers (unless your NIC supports wireless networking). And you must install special software, called client software, which allows the computer to talk to the servers and request resources from them. Once all this has been accomplished, the computer is “on the network.”

Network client software comes with all operating systems today. When you configure your computer to participate in the network, the operating system utilizes this software. To the client, the server may be nothing more than just another drive letter.
However, because it is in a network environment, the client can use the server as a doorway to more storage or more applications or to communicate with other computers or other networks. To users, being on a network changes a few things:

They can store more information because they can store data on other computers on the network.
They can share and receive information from other users, perhaps even collaborating on the same document.
They can use programs that would be too large or complex for their computer to use by itself.
They can use hardware not attached directly to their computer, such as a printer.

Is That a Server or a Workstation?

This is one of the things that author Quentin Docter does when teaching novice technicians. In the room, there will be a standard-looking mini-tower desktop computer. He points to it and asks, “Is that a server or a workstation?” A lot of techs will look at it and say it’s a workstation because it is a desktop computer. The real answer is, “It depends.”

Although many people have a perception that servers are ultra-fancy, rack-mounted devices, that isn’t necessarily true. It’s true that servers typically need more powerful hardware than do workstations because of their role on the network, but that doesn’t have to be the case. (Granted, having servers that are less powerful than your workstations doesn’t make logical sense.) What really differentiates a workstation from a server is what operating system it has installed and what role it plays on the network. For example, if that system has Windows Server 2016 installed on it, you can be pretty sure that it’s a server. If it has Windows 7 or Windows 10, it’s more than likely going to be a client, but not always. Computers with operating systems such as Windows 10 can be both clients on the network and nondedicated servers, as would be the case if you share your local printer with others on the network. The moral of the story?
Don’t assume a computer’s role simply by looking at it. You need to understand what is on it and its role on the network to make that determination.

Network Resources

We now have the server to share the resources and the workstation to use them, but what about the resources themselves? A resource (as far as the network is concerned) is any item that can be used on a network. Resources can include a broad range of items, but the following items are among the most important:

Printers and other peripherals
Disk storage and file access
Applications

When only a few printers (and all the associated consumables) have to be purchased for the entire office, the costs are dramatically lower than the costs of supplying printers at every workstation.

Networks also give users more storage space to store their files. Client computers can’t always handle the overhead involved in storing large files (for example, database files) because they are already heavily involved in users’ day-to-day work activities. Because servers in a network can be dedicated to only certain functions, a server can be allocated to store all of the larger files that are used every day, freeing up disk space on client computers. In addition, if users store their files on a server, the administrator can back up the server periodically to ensure that if something happens to a user’s files, those files can be recovered.

Files that all users need to access (such as emergency contact lists and company policies) can also be stored on a server. Having one copy of these files in a central location saves disk space, as opposed to storing the files locally on everyone’s system.

Applications (programs) no longer need to be on every computer in the office. If the server is capable of handling the overhead that an application requires, the application can reside on the server and be used by workstations through a network connection.
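The idea of a central machine handing out resources on request can be illustrated with a toy example. This is not a real NOS or file-sharing protocol, just a minimal loopback demo with a made-up file name, showing a "file server" answering a client's request for a centrally stored document:

```python
# Toy client-server demo: a "file server" hands out a centrally stored
# document to any workstation that asks for it by name. Runs entirely
# on the local machine over the loopback interface.
import socket
import threading

SHARED_FILES = {"policies.txt": b"All visitors must sign in at the front desk."}

def serve_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        name = conn.recv(1024).decode()               # which file is wanted?
        conn.sendall(SHARED_FILES.get(name, b"NOT FOUND"))

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))                          # pick any free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# The "workstation" side: connect, ask for a file, read the reply.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"policies.txt")
reply = client.recv(1024)
client.close()
print(reply.decode())
```

The document lives in one place; every client that connects gets the same copy, which is the essence of centralized resource sharing.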
The sharing of applications over a network requires a special arrangement with the application vendor, who may wish to set the price of the application according to the number of users who will be using it. The arrangement allowing multiple users to use a single installation of an application is called a site license.

Being on a Network Brings Responsibilities

You are part of a community when you are on a network, which means that you need to take responsibility for your actions. First, a network is only as secure as the users who use it. You cannot randomly delete files or move documents from server to server. You do not own your email, so anyone in your company’s management team can choose to read it. In addition, sending something to the printer does not necessarily mean that it will print immediately—your document may not be the first in line to be printed at the shared printer. Plus, if your workstation has also been set up as a nondedicated server, you cannot turn it off.

Network Operating Systems

PCs use a disk operating system that controls the file system and how the applications communicate with the hard disk. Networks use a network operating system (NOS) to control the communication with resources and the flow of data across the network. The NOS runs on the server. Some of the more popular NOSs are Linux, Microsoft’s Windows Server series (Server 2019, Server 2016, and so on), and macOS Server. Several other companies offer network operating systems as well.

Network Resource Access

We have discussed two major components of a typical network—servers and workstations—and we’ve also talked briefly about network resources. Let’s dive in a bit deeper on how those resources are accessed on a network. There are generally two resource access models: peer-to-peer and client-server. It is important to choose the appropriate model. How do you decide which type of resource model is needed?
You must first think about the following questions:

What is the size of the organization?
How much security does the company require?
What software or hardware does the resource require?
How much administration does it need?
How much will it cost?
Will this resource meet the needs of the organization today and in the future?
Will additional training be needed?

Networks cannot just be put together at the drop of a hat. A lot of planning is required before implementation of a network to ensure that whatever design is chosen will be effective and efficient, not just for today but for the future as well. The forethought of the designer will lead to the best network with the least amount of administrative overhead. In each network, it is important that a plan be developed to answer the previous questions. The answers will help the designer choose the type of resource model to use.

Peer-to-Peer Networks

In a peer-to-peer network, the computers act as both service providers and service requestors. An example of a peer-to-peer resource model is shown in Figure 6.5.

Figure 6.5 The peer-to-peer resource model

The peer-to-peer model is great for small, simple, inexpensive networks. This model can be set up almost instantly, with little extra hardware required. Many versions of Windows (Windows 10, Windows 8, and others) as well as Linux and macOS are popular operating system environments that support the peer-to-peer resource model. Peer-to-peer networks are also referred to as workgroups.

Generally speaking, there is no centralized administration or control in the peer-to-peer resource model. Every station has unique control over the resources that the computer owns, and each station must be administered separately. This very lack of centralized control can make administering the network difficult; for the same reason, the network isn’t very secure.
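The administrative burden of decentralized accounts can be sketched in a few lines, contrasting a workgroup (every machine keeps its own account list) with the centralized server-based model covered next (one account store covers every workstation). This is purely illustrative; the dictionaries stand in for real security databases, and the machine and user names are made up.

```python
# Peer-to-peer (workgroup): an account is needed on every machine used.
workgroup = {
    "PC-01": {"jsmith": "pw1"},
    "PC-02": {"jsmith": "pw2"},   # a second password to keep in sync
    "PC-03": {},                  # no account here, so no login here
}

# Client-server (domain): one account on the central server covers all.
domain_controller = {"jsmith": "pw"}

def can_log_in(user, machine_accounts=None, domain=None):
    """Check a login against either a machine's local list or the central store."""
    if domain is not None:
        return user in domain              # any workstation asks the server
    return user in (machine_accounts or {})

print(can_log_in("jsmith", machine_accounts=workgroup["PC-03"]))  # False
print(can_log_in("jsmith", domain=domain_controller))             # True
```

With three machines the workgroup already holds two passwords and one gap; with a central store, one account works everywhere.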
Each user needs to manage separate passwords for each computer on which they wish to access resources, as well as set up and manage the shared resources on their own computer. Moreover, because each computer is acting as both a workstation and a server, it may not be easy to locate resources. The person who is in charge of a file may have moved it without anyone’s knowledge. Also, the users who work under this arrangement need more training because they are not only users but also administrators.

Will this type of network meet the needs of the organization today and in the future? Peer-to-peer resource models are generally considered the right choice for small companies that don’t expect future growth. Small companies that expect growth, on the other hand, should not choose this type of model.

A rule of thumb is that if you have no more than 10 computers and centralized security is not a key priority, a workgroup may be a good choice for you.

Client-Server Resource Model

The client-server model (also known as the server-based model) is better than the peer-to-peer model for large networks (say, more than 10 computers) that need a more secure environment and centralized control. Server-based networks use one or more dedicated, centralized servers. All administrative functions and resource sharing are performed from this point. This makes it easier to share resources, perform backups, and support an almost unlimited number of users. This model also offers better security than the peer-to-peer model. However, the server needs more hardware than a typical workstation/server computer in a peer-to-peer resource model. In addition, it requires specialized software (the NOS) to manage the server’s role in the environment. With the addition of a server and the NOS, server-based networks can easily cost more than peer-to-peer resource models. However, for large networks, it’s the only choice. An example of a client-server resource model is shown in Figure 6.6.
Figure 6.6 The client-server resource model

Server-based networks are often known as domains. The key characteristic of a server-based network is that security is centrally administered. When you log into the network, the login request is passed to the server responsible for security, sometimes known as a domain controller. (Microsoft uses the term domain controller, whereas other vendors of server products do not.) This is different from the peer-to-peer model, where each individual workstation validates users. In a peer-to-peer model, if the user jsmith wants to be able to log into different workstations, she needs to have a user account set up on each machine. This can quickly become an administrative nightmare! In a domain, all user accounts are stored on the server. User jsmith needs only one account and can log into any of the workstations in the domain.

Client-server resource models are the desired models for companies that are continually growing, need to support a large environment, or need centralized security. Server-based networks offer the flexibility to add more resources and clients almost indefinitely into the future. Hardware costs may be higher, but with the centralized administration, managing resources becomes less time consuming. Also, only a few administrators need to be trained, and users are responsible for only their own work environment.

If you are looking for an inexpensive, simple network with little setup required, and there is no need for the company to grow in the future, then the peer-to-peer network is the way to go. If you are looking for a network to support many users (more than 10 computers), strong security, and centralized administration, consider the server-based network your only choice.

Whatever you decide, always take the time to plan your network before installing it. A network is not something you can just throw together.
You don't want to find out a few months down the road that the type of network you chose does not meet the needs of the company—this could be a time-consuming and costly mistake.

Network Topologies

A topology is a way of laying out the network. When you plan and install a network, you need to choose the right topology for your situation. Each type differs from the others by its cost, ease of installation, fault tolerance (how the topology handles problems such as cable breaks), and ease of reconfiguration (such as adding a new workstation to the existing network). There are five primary topologies:

Bus
Star
Ring
Mesh
Hybrid

Each topology has advantages and disadvantages. Table 6.1 summarizes the advantages and disadvantages of each topology, and then we will go into more detail about each one.

Table 6.1 Topologies—advantages and disadvantages

Topology   Advantages                                        Disadvantages
Bus        Cheap. Easy to install.                           Difficult to reconfigure. A break in the bus disables the entire network.
Star       Cheap. Very easy to install and reconfigure.      More expensive than bus.
           More resilient to a single cable failure.
Ring       Efficient. Easy to install.                       Reconfiguration is difficult. Very expensive.
Mesh       Best fault tolerance.                             Reconfiguration is extremely difficult, extremely expensive, and very complex.
Hybrid     Gives a combination of the best features of       Complex (less so than mesh, however).
           each topology used.

Bus Topology

A bus topology is the simplest. It consists of a single cable that runs to every workstation, as shown in Figure 6.7. This topology uses the least amount of cabling. Each computer shares the same data and address path. With a bus topology, messages pass through the trunk, and each workstation checks to see if a message is addressed to it. If the address of the message matches the workstation's address, the network adapter retrieves it. If not, the message is ignored.

Figure 6.7 The bus topology

Cable systems that use the bus topology are easy to install.
You run a cable from the first computer to the last computer. All of the remaining computers attach to the cable somewhere in between. Because of the simplicity of installation, and because of the low cost of the cable, bus topology cabling systems are the cheapest to install. Although the bus topology uses the least amount of cabling, it is difficult to add a workstation. If you want to add another workstation, you have to reroute the cable completely and possibly run two additional lengths of it. Also, if any one of the cables breaks, the entire network is disrupted. Therefore, such a system is expensive to maintain and can be difficult to troubleshoot. You will rarely run across physical bus networks in use today.

Star Topology

A star topology branches each network device off a central device called a hub or a switch, making it easy to add a new workstation. If a workstation goes down, it does not affect the entire network; if the central device goes down, the entire network goes with it. Because of this, the hub (or switch) is called a single point of failure. Figure 6.8 shows a simple star network.

Figure 6.8 The star topology

Star topologies are very easy to install. A cable is run from each workstation to the switch. The switch is placed in a central location in the office (for example, a utility closet). Star topologies are more expensive to install than bus networks because several more cables need to be installed, plus the switches. But the ease of reconfiguration and fault tolerance (one cable failing does not bring down the entire network) far outweigh the drawbacks. This is the most commonly installed network topology in use today.

Although the switch is the central portion of a star topology, some older networks use a device known as a hub instead of a switch. Switches are more advanced than hubs, and they provide better performance than hubs for only a small price increase.
Colloquially, though, many administrators use the terms hub and switch interchangeably.

Ring Topology

In a ring topology, each computer connects to two other computers, joining them in a circle and creating a unidirectional path where messages move from workstation to workstation. Each entity participating in the ring reads a message and then regenerates it and hands it to its neighbor on a different network cable. See Figure 6.9 for an example of a ring topology.

Figure 6.9 The ring topology

The ring makes it difficult to add new computers. Unlike a star topology network, a ring topology network will go down if one entity is removed from the ring. Physical ring topology systems rarely exist anymore, mainly because the hardware involved was fairly expensive and the fault tolerance was very low.

You might have heard of an older network architecture called Token Ring. Contrary to its name, it does not use a physical ring. It actually uses a physical star topology, but the traffic flows in a logical ring from one computer to the next.

Mesh Topology

The mesh topology is the most complex in terms of physical design. In this topology, each device is connected to every other device (see Figure 6.10). This topology is rarely found in wired LANs, mainly because of the complexity of the cabling. If there are x computers, there will be (x × (x – 1)) ÷ 2 cables in the network. For example, if you have five computers in a mesh network, it will use (5 × (5 – 1)) ÷ 2 = 10 cables. This complexity is compounded when you add another workstation. For example, your 5-computer, 10-cable network will jump to 15 cables if you add just one more computer. Imagine how the person doing the cabling would feel if you told them they had to cable 50 computers in a mesh network—they'd have to come up with (50 × (50 – 1)) ÷ 2 = 1,225 cables! (Not to mention figuring out how to connect them all.)
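The cable-count formula above is easy to check with a few lines of code. A minimal sketch:

```python
def mesh_cables(x):
    """Cables needed for a full physical mesh of x devices: (x * (x - 1)) / 2."""
    return x * (x - 1) // 2

for computers in (5, 6, 50):
    print(computers, "computers need", mesh_cables(computers), "cables")
# 5 computers need 10 cables
# 6 computers need 15 cables
# 50 computers need 1225 cables
```

Note that adding one device to an x-device mesh always requires x new cables (one to each existing device), which is why the 5-computer network jumps from 10 cables to 15.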
Figure 6.10 The mesh topology

Because of its design, the physical mesh topology is expensive to install and maintain. Cables must be run from each device to every other device. The advantage you gain is high fault tolerance. With a mesh topology, there will always be a way to get the data from source to destination. The data may not be able to take the direct route, but it can take an alternate, indirect route. For this reason, the mesh topology is often used to connect multiple sites across WAN links. It uses devices called routers to search multiple routes through the mesh and determine the best path. However, the mesh topology does become inefficient with five or more entities because of the number of connections that need to be maintained.

Hybrid Topology

The hybrid topology is simply a mix of the other topologies. It would be impossible to illustrate it because there are many combinations. In fact, most networks today are not only hybrid but heterogeneous. (They include a mix of components of different types and brands.) The hybrid network may be more expensive than some types of network topologies, but it takes the best features of all the other topologies and exploits them. Table 6.1, earlier in this chapter, summarizes the advantages and disadvantages of each type of network topology.

Rules of Communication

Regardless of the type of network you choose to implement, the computers on that network need to know how to talk to each other. To facilitate communication across a network, computers use a common language called a protocol. We'll cover protocols more in Chapter 7, "Introduction to TCP/IP," but essentially they are languages much like English is a language. Within each language, there are rules that need to be followed so that all computers understand the right communication behavior.

To use a human example, within English there are grammar rules.
If you put a bunch of English words together in a way that doesn't make sense, no one will understand you. If you just decide to omit verbs from your language, you're going to be challenged to get your point across. And if everyone talks at the same time, the conversation can be hard to follow. Computers need standards to follow to keep their communication clear. Different standards are used to describe the rules that computers need to follow to communicate with each other. The most important communication framework, and the backbone of all networking, is the OSI model.

The OSI model is not specifically listed in the CompTIA A+ exam objectives. However, it's a critical piece of networking knowledge and a framework with which all technicians should be familiar.

OSI Model

The International Organization for Standardization (ISO) published the Open Systems Interconnection (OSI) model in 1984 to provide a common way of describing network protocols. The ISO put together a seven-layer model providing a relationship between the stages of communication, with each layer adding to the layer above or below it.

This OSI model is a theoretical model governing computer communication. Even though at one point an "OSI protocol" was developed, it never gained wide acceptance. You will never find a network that is running the "OSI protocol."

Here's how the theory behind the OSI model works: As a transmission takes place, the higher layers pass data through the lower layers. As the data passes through a layer, that layer tacks its information (also called a header) onto the beginning of the information being transmitted until it reaches the bottom layer. A layer may also add a trailer to the end of the data. The bottom layer sends the information out on the wire (or in the air, in the case of wireless).
At the receiving end, the bottom layer receives and reads the information in the header, removes the header and any associated trailer related to its layer, and then passes the remainder to the next highest layer. This procedure continues until the topmost layer receives the data that the sending computer sent.

The OSI model layers are listed here from top to bottom, with descriptions of what each of the layers is responsible for:

7—Application layer  The Application layer allows access to network services. This is the layer at which file services, print services, and other applications operate.

6—Presentation layer  This layer determines the "look," or format, of the data. The Presentation layer performs protocol conversion and manages data compression, data translation, and encryption. The character set information also is determined at this level. (The character set determines which numbers represent which alphanumeric characters.)

5—Session layer  This layer allows applications on different computers to establish, maintain, and end a session. A session is one virtual conversation. For example, all of the procedures needed to transfer a single file make up one session. Once the session is over, a new process begins. This layer enables network procedures, such as identifying passwords, logins, and network monitoring.

4—Transport layer  The Transport layer controls the data flow and troubleshoots any problems with transmitting or receiving datagrams. It also takes large messages and segments them into smaller ones and takes smaller segments and combines them into a single, larger message, depending on which way the traffic is flowing. Finally, the TCP protocol (one of the two options at this layer) has the important job of verifying that the destination host has received all packets, providing error checking and reliable end-to-end communications.

3—Network layer  The Network layer is responsible for logical addressing of messages.
At this layer, the data is organized into chunks called packets. The Network layer is something like the traffic cop. It is able to judge the best network path for the data based on network conditions, priority, and other variables. This layer manages traffic through packet switching, routing, and controlling congestion of data.

2—Data Link layer  This layer arranges data into chunks called frames. Included in these chunks is control information indicating the beginning and end of the datastream. The Data Link layer is very important because it makes transmission easier and more manageable, and it allows for error checking within the data frames. The Data Link layer also describes the unique physical address (also known as the MAC address) for each NIC. The Data Link layer is actually subdivided into two sections: Media Access Control (MAC) and Logical Link Control (LLC).

1—Physical layer  The Physical layer describes how the data gets transmitted over a communication medium. This layer defines how long each piece of data is and the translation of each into the electrical pulses or light impulses that are sent over the wires, or the radio waves that are sent through the air. It decides whether data travels unidirectionally or bidirectionally across the hardware. It also relates electrical, optical, mechanical, and functional interfaces to the cable.

Figure 6.11 shows the complete OSI model. Note the relationship of each layer to the others and the function of each layer.

Figure 6.11 The OSI model

A helpful mnemonic device to remember the OSI layers in order is "All People Seem To Need Data Processing."

IEEE 802 Standards

Continuing with our theme of communication, it's time to introduce one final group of standards. You've already learned that a protocol is like a language; think of the IEEE 802 standards as syntax, or the rules that govern who communicates, when they do it, and how they do it.
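The header-wrapping behavior described earlier (each sending layer prepends its header, and the receiver strips the headers off in reverse order) can be illustrated with a toy sketch. The bracketed string "headers" here are purely illustrative; real protocols use binary headers and trailers:

```python
# The seven OSI layers, listed from layer 7 (top) down to layer 1 (bottom).
LAYERS = ["Application", "Presentation", "Session", "Transport",
          "Network", "Data Link", "Physical"]

def encapsulate(payload):
    # Each layer, from 7 down to 1, tacks its header onto the front,
    # so the Physical layer's header ends up outermost.
    for layer in LAYERS:
        payload = f"[{layer}]" + payload
    return payload

def decapsulate(frame):
    # The receiver removes headers outermost-first (layer 1 back up to 7).
    for layer in reversed(LAYERS):
        frame = frame.removeprefix(f"[{layer}]")
    return frame

frame = encapsulate("Hello")
print(frame)               # [Physical][Data Link]...[Application]Hello
print(decapsulate(frame))  # Hello
```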
The Institute of Electrical and Electronics Engineers (IEEE) formed a subcommittee to create standards for network types. These standards specify certain types of networks, although not every network protocol is covered by the IEEE 802 committee specifications. This model contains several standards. The ones commonly in use today are 802.3 CSMA/CD (Ethernet) LAN and 802.11 (wireless networks). The IEEE 802 standards were designed primarily for enhancements to the bottom three layers of the OSI model. The IEEE 802 standard breaks the Data Link layer into two sublayers: a Logical Link Control (LLC) sublayer and a Media Access Control (MAC) sublayer. The Logical Link Control sublayer manages data link communications. The Media Access Control sublayer watches out for data collisions and manages physical addresses, also referred to as MAC addresses.

You've most likely heard of 802.11ac or 802.11n wireless networking. The rules for communicating with all versions of 802.11 are defined by the IEEE standard. Another very well-known standard is 802.3 CSMA/CD. You might know it by its more popular name, Ethernet. The original 802.3 CSMA/CD standard defines a bus topology network that uses a 50-ohm coaxial baseband cable and carries transmissions at 10 Mbps. This standard groups data bits into frames and uses the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) cable access method to put data on the cable. Currently, the 802.3 standard has been amended to include speeds up to 10 Gbps.

Breaking the CSMA/CD acronym apart may help illustrate how it works:

CS  First, there is the Carrier Sense (CS) part, which means that computers on the network are listening to the wire at all times.

MA  Multiple Access (MA) means that multiple computers have access to the line at the same time. This is analogous to having five people on a conference call. Everyone is listening, and everyone in theory can try to talk at the same time.
Of course, when more than one person talks at once, there is a communication error. In CSMA/CD, when two machines transmit at the same time, a data collision takes place and the intended recipients receive none of the data.

CD  This is where the Collision Detection (CD) portion of the acronym comes in; the collision is detected and each sender knows they need to send again. Each sender then waits for a short, random period of time and tries to transmit again. This process repeats until transmission takes place successfully.

The CSMA/CD technology is considered a contention-based access method. The only major downside to 802.3 is that with large networks (more than 100 computers on the same segment), the number of collisions increases to the point where more collisions than transmissions are taking place.

Other examples of contention methods exist, such as Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). Whereas CSMA/CD tries to fix collisions after they happen, CSMA/CA tries to avoid them in the first place by actively listening and only transmitting when the channel is clear. Wireless Ethernet uses CSMA/CA.

Identifying Common Network Hardware

We have looked at the types of networks, network topologies, and the way communications are handled. That's all of the logical stuff. To really get computers to talk to each other requires hardware. Every computer on the network needs to have a network adapter of some type. In many cases, you also need some sort of cable to hook them together. (Wireless networking is the exception, but at the back end of a wireless network there are still components wired together.) And finally, you might also need connectivity devices to attach several computers or networks to each other.

Network Interface Cards

The network interface card (NIC), also referred to as a network adapter card, provides the physical interface between computer and cabling. It prepares data, sends data, and controls the flow of data.
It can also receive and translate data into bytes for the CPU to understand. NICs come in many shapes and sizes. Different NICs are distinguished by the PC bus type and the network for which they are used. The following sections describe the role of NICs and how to evaluate them.

Compatibility

The first thing you need to determine is whether the NIC will fit the bus type of your PC. If you have more than one type of bus in your PC (for example, a combination PCI/PCI Express), use a NIC that fits into the fastest type (the PCI Express, in this case). This is especially important in servers because the NIC can quickly become a bottleneck if this guideline isn't followed. More and more computers are using NICs that have USB interfaces. For the rare laptop computer that doesn't otherwise have a NIC built into it, these small portable cards are very handy.

A USB network card can also be handy for troubleshooting. If a laptop isn't connecting to the network properly with its built-in card, you may be able to use the USB NIC to see if it's an issue with the card or perhaps a software problem.

Network Interface Card Performance

The most important goal of the NIC is to optimize network performance and minimize the amount of time needed to transfer data packets across the network. The key is to ensure that you get the fastest card that you can for the type of network that you're on. For example, if your wireless network supports 802.11g/n/ac, make sure to get an 802.11ac card because it's the fastest.

Sending and Controlling Data

In order for two computers to send and receive data, the cards must agree on several things:

The maximum size of the data frames
The amount of data sent before giving confirmation
The time needed between transmissions
The amount of time to wait before sending confirmation
The speed at which data transmits

If the cards can agree, the data is sent successfully. If the cards cannot agree, the data is not sent.
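Conceptually, that negotiation is an all-or-nothing comparison of the five items above. A minimal sketch, using hypothetical parameter names of my own choosing:

```python
# Hypothetical names for the five negotiated items listed above.
PARAMETERS = ["max_frame_size", "frames_before_ack",
              "gap_between_transmissions_us", "ack_wait_time_ms", "speed_mbps"]

def cards_agree(nic_a, nic_b):
    """Two NICs can exchange data only if they agree on every parameter."""
    return all(nic_a[p] == nic_b[p] for p in PARAMETERS)

a = {"max_frame_size": 1518, "frames_before_ack": 4,
     "gap_between_transmissions_us": 96, "ack_wait_time_ms": 10,
     "speed_mbps": 1000}
b = dict(a)
print(cards_agree(a, b))   # the cards agree; data can be sent
b["speed_mbps"] = 100
print(cards_agree(a, b))   # a speed mismatch blocks communication
```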
To send data on the network successfully, all NICs need to use the same media access method (such as CSMA/CD) and be connected to the same piece of cable. This usually isn't a problem, because the vast majority of network cards sold today are Ethernet.

In addition, NICs can send data using either full-duplex or half-duplex mode. Half-duplex communication means that between the sender and receiver, only one of them can transmit at any one time. In full-duplex communication, a computer can send and receive data simultaneously. The main advantage of full-duplex over half-duplex communication is performance. Gigabit Ethernet NICs, for example, can move twice as much data in full-duplex mode (1 Gbps in each direction simultaneously) as in half-duplex mode (an effective 500 Mbps). In addition, collisions are avoided, which speeds up performance as well. Configuring the network adapter's duplexing setting is done from the Advanced tab of the NIC's properties, as shown in Figure 6.12.

Figure 6.12 A NIC's Speed & Duplex setting

Normally, you aren't going to have to worry about how your NIC sends or controls data. Just make sure to get the fastest NIC that is compatible with your network. Do know that the negotiations discussed here are happening in the background, though.

NIC Configuration

Each card must have a unique hardware address, called a Media Access Control (MAC) address. If two NICs on the same network have the same hardware address, neither one will be able to communicate. For this reason, the IEEE has established a standard for hardware addresses and assigns blocks of these addresses to NIC manufacturers, who then hard-wire the addresses into the cards. MAC addresses are 48 bits long and written in hexadecimal, such as B6-15-53-8F-29-6B. An example is shown in Figure 6.13 from the output of the ipconfig /all command executed at the command prompt.
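To illustrate the 48-bit format, here is a small sketch that validates a MAC address and splits off the IEEE-assigned manufacturer prefix (the first three octets, conventionally called the OUI). The helper itself is hypothetical, written for this example:

```python
import re

def parse_mac(mac):
    """Validate a 48-bit MAC address written as six hex octets separated
    by '-' or ':' and split it into the IEEE-assigned manufacturer prefix
    (OUI) and the device-specific half."""
    octets = re.split(r"[-:]", mac)
    if len(octets) != 6 or not all(re.fullmatch(r"[0-9A-Fa-f]{2}", o)
                                   for o in octets):
        raise ValueError(f"not a valid MAC address: {mac}")
    return {"oui": "-".join(octets[:3]).upper(),
            "nic_specific": "-".join(octets[3:]).upper()}

print(parse_mac("B6-15-53-8F-29-6B"))
# {'oui': 'B6-15-53', 'nic_specific': '8F-29-6B'}
```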
Figure 6.13 Physical (MAC) address

Although it is possible for NIC manufacturers to produce multiple NICs with the same MAC address, it happens very rarely. If you do encounter this type of problem, contact the hardware manufacturer.

NIC Drivers

In order for the computer to use the NIC, it is very important to install the proper device drivers. These drivers are pieces of software that communicate directly with the operating system, specifically the network redirector and adapter interface. Drivers are specific to each NIC and operating system, and they operate in the Media Access Control (MAC) sublayer of the Data Link layer of the OSI model. To see which version of the driver is installed, you need to look at the device's properties. There are several ways to do this. A common one is to open Device Manager (click Start, type Device, and click Device Manager under Best match) and find the device, as shown in Figure 6.14.

Figure 6.14 Device Manager

Right-click the device, click Properties, and then go to the Driver tab, as shown in Figure 6.15. Here you can see a lot of information about the driver, update it, or roll it back if you installed a new one and it fails for some reason. You can also update the driver by right-clicking the device in Device Manager and choosing Update Driver from the pop-up menu.

Figure 6.15 NIC properties Driver tab

The best place to get drivers is always the manufacturer's website. When you click Update Driver, Windows will ask you if you want to search for the driver on the Internet or provide a location for it. The best course of action is to download the driver first, and then tell Windows where you put it.

Cables and Connectors

When the data is passing through the OSI model and reaches the Physical layer, it must find its way onto the medium that is used to transfer data physically from computer to computer. This medium is called the cable (or, in the case of wireless networks, the air).
It is the NIC's role to prepare the data for transmission, but it is the cable's role to move the data properly to its intended destination. The following sections discuss the three main types of physical cabling: coaxial, twisted-pair, and fiber-optic. (Wireless communication is covered in Chapter 8.)

Coaxial Cable

Coaxial cable (or coax) contains a center conductor core made of copper, which is surrounded by a plastic jacket with a braided shield over it (as shown in Figure 6.16). Either Teflon or a plastic coating covers this metal shield.

Figure 6.16 Coaxial cable

Common network cables are covered with a plastic called polyvinyl chloride (PVC). While PVC is flexible, fairly durable, and inexpensive, it has a nasty side effect in that it produces poisonous gas when burned. An alternative is a Teflon-type covering that is frequently referred to as a plenum-rated coating. That simply means that the coating does not produce toxic gas when burned and is rated for use in the ventilation plenum areas in a building that circulate breathable air, such as air conditioning and heating systems. This type of cable is more expensive, but it may be mandated by electrical code whenever cable is hidden in walls or ceilings.

Plenum rating can apply to all types of network cabling.

Coax Cable Specifications

Coaxial cable is available in various specifications that are rated according to the Radio Guide (RG) system, which was originally developed by the US military. The thicker the copper, the farther a signal can travel—and with that comes a higher cost and a less flexible cable. When coax cable was popular for networking, there were two standards that had fairly high use: RG-8 (thicknet) and RG-58A/U (thinnet). Thicknet had a maximum segment distance of 500 meters and was used primarily for network backbones. Thinnet was more often used in a conventional physical bus. A thinnet segment could span 185 meters. Both thicknet and thinnet had an impedance of 50 ohms.
Table 6.2 shows the different types of RG cabling and their uses. The ones that are included on the A+ exam objectives are RG-6 and RG-59.

Table 6.2 Coax RG types

RG #       Popular Name                       Ethernet Implementation   Type of Cable
RG-6       Satellite/cable TV, cable modems   N/A                       Solid copper
RG-8       Thicknet                           10Base5                   Solid copper
RG-58 U    N/A                                None                      Solid copper
RG-58 A/U  Thinnet                            10Base2                   Stranded copper
RG-59      Cable television                   N/A                       Solid copper

Explaining Ethernet Naming Standards

In Table 6.2, you will notice two terms that might be new to you: 10Base5 and 10Base2. These are Ethernet naming standards. The number at the beginning tells you the maximum speed that the standard supports, which is 10 Mbps in this case. The word Base refers to the type of transmission, either baseband (one signal at a time per cable) or broadband (multiple signals at the same time on one cable). Legend has it that the 5 and the 2 refer to the approximate maximum transmission distance (in hundreds of meters) for each specification. Later in the chapter, you will see 10BaseT, which refers to twisted-pair cabling.

Coaxial networking has all but gone the way of the dinosaur. The only two coaxial cable types used today are RG-6 and RG-59. Of the two, RG-6 has a thicker core (1.0 mm), can run longer distances (up to 304 meters, or 1,000 feet), and supports digital signals. RG-59 (0.762 mm core) is considered adequate for analog cable TV but not digital, and has a maximum distance of about 228 meters (750 feet). The maximum speed for each depends on the quality of the cable and the standard on which it's being used. Both have an impedance of 75 ohms.

Coax Connector Types

Thicknet was a bear to use. Not only was it highly inflexible, but you also needed to use a connector called
