Summary

This document discusses the OSI model, its seven layers, and how data moves through networks. It also explains how the model is used for troubleshooting and how the data's name and format change as it moves through the layers. The document provides helpful mnemonics and examples.

Full Transcript


In this section of the course, we're going to learn all about the OSI model. Now, you may be wondering, what is the OSI model? Well, OSI stands for Open Systems Interconnection. And when we talk about the OSI model, this was developed all the way back in 1977 by the International Organization for Standardization. Now, this organization is responsible for creating different standards, which we refer to as ISO and then some number behind it. For example, if you see ISO 7498, that is the standard that refers to the OSI model. Now for the exam, you do not need to worry about memorizing any kinds of ISO numbers, but I do want to introduce you to this concept because everything we cover in computing is going to be associated with some kind of standard. Now, in the case of the Open Systems Interconnection model, most people simply call this the OSI model, but you'll also hear it referred to as the OSI stack sometimes. Either way, we're really talking about the exact same thing. Now, the OSI model is extremely important in Network Plus because it's a fundamental tool that we're going to use to discuss all the pieces and parts of our networks. In fact, it's so important that you're going to see some questions on the exam about this concept and how it relates to other things in our networks. The OSI model is going to be made up of seven different layers, and we're going to talk about each of these seven layers in the next seven videos, covering one layer per video, because this model is just that important to our understanding of our computer networks. Now, as we work through this section of the course, we're going to be focused mainly on domain one, networking concepts, and specifically we'll be covering all of objective 1.1, which states that we must be able to explain concepts related to the Open Systems Interconnection, or OSI, reference model. Now, the seven layers that make up the OSI model are useful when we're trying to troubleshoot a network, because if we have a problem and we think about it from all seven of those layers, we're going to be able to start identifying exactly where the problem is, and then we can troubleshoot it more accurately. Now, the OSI model is also called the OSI reference model, and you may be wondering, what do you mean by a reference model? Well, a reference model is simply something that we use to categorize the functions of a network into particular layers, and that's what we're going to do when we use the OSI model during our troubleshooting efforts. Now, when you start to look at the OSI model in depth, you're going to start to notice that the OSI model doesn't actually line up cleanly or easily or accurately with the way that our modern networks operate. For example, some things are going to operate at multiple layers of the OSI model in our modern networks, especially when we're discussing the upper three layers. Now, there's a good reason why the OSI model isn't a perfect match for the way our networks operate today, and that's because our networks today operate under a model known as the TCP/IP model. Now, to keep things simple for right now, just remember that the OSI model is a reference model, and we're going to use it in our network operations and troubleshooting because, again, it's very generic in nature and it's going to allow us to work with any and all network types we come across, not just the most commonly used network model we use today, which is actually TCP/IP. 
Now, one of the benefits of using a reference model like the OSI model is that it can be used equivalently across lots of different technologies and devices and manufacturers. So if I'm going to look at a particular wireless network card for my computer, I can compare how it operates through each of the seven layers of the OSI model and then compare it equivalently to a different manufacturer's wireless networking card. When you can understand the functions being performed at each and every layer, this helps you to better understand the flow of data in your network through the card, how it communicates, and how you can troubleshoot it too. All right, now that I've mentioned there are seven layers of the OSI model several times in this video, I still haven't really shown you what those seven layers are. So let's take a look at the seven layers of the OSI model. First, we start at the bottom layer, and we work our way from layer one all the way to the top, which is known as layer seven. These layers are the physical layer, the data link layer, the network layer, the transport layer, the session layer, the presentation layer, and the application layer. Now for the exam, you need to remember these seven layers and know them in the proper order. You need to know them going from bottom to top and from top to bottom. Now, to help you with that, I like to use a memory aid known as a mnemonic. Now in the case of the OSI model, I like to think about my favorite kind of pizza when I'm listing the layers from the bottom to the top. Now, my favorite kind of pizza is sausage pizza. So if you want to remember the seven layers of the OSI model, just remember what I always say about sausage pizza. Please do not throw sausage pizza away. After all, I think sausage pizza is pretty darn tasty, and therefore we shouldn't be wasting it by throwing it in the garbage. Now, by using this sentence, you'll be able to remember the seven layers from bottom to top. Just take the letters from the sentence, please do not throw sausage pizza away, or P-D-N-T-S-P-A, and replace them with the words physical, data link, network, transport, session, presentation, and application. And there you have it, the seven layers of the OSI model from bottom to top, at least that's how I like to remember it, but you can use whatever mnemonic you want. I've heard many interesting ones over the years from my different students. Once I was teaching a class to a bunch of Navy sailors and they told me their saying was, please do not tell shore patrol anything. For those of you who don't know, shore patrol is basically like the military police in the Navy. So I'm guessing they didn't want to get in trouble with their ship's captain. So they believed in the mantra of, please do not tell shore patrol anything. And this kind of thing is like saying what happens in Vegas stays in Vegas, right? Exactly. Now, another one of my students told me that their favorite mnemonic was, please do not teach students pointless abbreviations. Now, I thought this was kind of funny because an abbreviation is exactly what we're trying to use here to remember the seven layers of the OSI model. Now, whatever works for you is fine with me. Just remember that you need to memorize these seven layers before you take your exam because on the exam, you are going to get questions about these seven layers. Now you might see something easy like what is the seventh layer of the OSI model? 
And in that case, you would just pick application and be done with it. Or they might give you something a little bit harder and ask, what layer is the session layer? And you would say, layer five. Now again, they might go a little bit more complex and ask you things about a device, such as what layer that device operates at. For example, a router is going to operate at layer three, or the network layer, of the OSI model, but a hub only operates at layer one because it's considered a physical device and therefore it works at the physical layer of the OSI model. Now because of this, we're going to cover each and every layer of the OSI model in this section of the course, so you can learn what devices are used at what layer, as well as which protocols and other information about them too. Alright, before we finish up this lesson, we need to talk about one more thing, and that's data, because our networks are designed for the purpose of making data flow across those networks. Now, data moves through the layers of the OSI model, and as it does, the name of it actually gets changed, and it's not called data anymore. So if you hear the word data, what we're really talking about is things at layers five, six, and seven of the OSI model, because all three of those layers use the term data. As you move down the layers and you go from four to three to two to one, that name is going to change each and every time. Now, when we're at layer four, or the transport layer, we are going to call it a segment. When we get to layer three, or the network layer, we call it a packet. When we get to layer two, or the data link layer, we call it a frame. And finally, if we're operating at layer one, or the physical layer, we convert the data into ones and zeros and we send it across the media, and we refer to this as bits. Now, this is something you're going to need to memorize for the exam too, but don't worry, I've got another mnemonic to help you remember this one. And this one is really just a simple question you can ask yourself. Here it is. Do some people fear birthdays? That's it. I think this is a valid question because as we get older, sometimes people start fearing their birthdays because they don't want to show their age and they don't want to appear older. But again, I know this is a really silly mnemonic, but hopefully it helps you remember the names the information takes as it flows from data in those upper layers down to the lower levels, going from data to segments to packets to frames to bits. Alright, if you can remember, do some people fear birthdays, that's going to give you D-S-P-F-B as your mnemonic. And then again, we have those data types going from layers five, six, and seven with data, moving down to segments, packets, frames, and bits at layers four, three, two, and one. So one more time for the exam. It is really important to understand the seven layers of the OSI model and how the data moves up and down through those layers as you're transmitting the data and encapsulating and decapsulating it. And that's what we're going to be talking about. Alright, as we begin this section, we're first going to spend one lesson on each of the different layers of the OSI model, as I said. Then we're going to move into our coverage of encapsulation and decapsulation. Encapsulation is the process of wrapping data with protocol information at each layer of the OSI model as it passes down the networking stack. 
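To make that layer-by-layer renaming a little more concrete, here is a small Python sketch of the encapsulation idea. It is only a toy model: the header labels and the sample payload are invented placeholders, not real protocol fields.

```python
# Toy model of encapsulation: application data is wrapped with an extra header
# at each layer on the way down the stack, and the unit gets a new name.
# The header labels and payload below are placeholders, not real protocol fields.
layers_down = [
    (4, "transport", "segment", "TCP header"),
    (3, "network",   "packet",  "IP header"),
    (2, "data link", "frame",   "Ethernet header"),
    (1, "physical",  "bits",    "line encoding"),
]

pdu = "GET /index.html"              # layers 7, 6, and 5 just call this "data"
print(f"L7-L5 data: {pdu!r}")
for number, name, unit, wrapper in layers_down:
    pdu = f"[{wrapper}]{pdu}"        # wrap the previous unit with this layer's information
    print(f"L{number} {name:<9} -> {unit}: {pdu!r}")
```

Reading the printed output from the bottom up gives you the receive-side view of the same process.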
Decapsulation, on the other hand, is the process of removing that protocol information layer by layer as the data unpacks itself on the receiving end and moves up the OSI model. After that, I'm going to demonstrate how you can use a software tool known as Wireshark. Now, Wireshark is a network protocol analyzer that captures and displays data as it travels back and forth over a network. And this can be done in real time or captured and analyzed later. Now, we do this for the purposes of inspection, troubleshooting, and analysis. And in our case, we're going to use Wireshark to analyze a packet capture so you can better see how the data is broken down into these different layers of the OSI model. And then finally, at the end of the section, we'll take a short quiz to see if you remembered what you learned in this section. As we do that, we'll also review your answers to make sure you know why the right answers were right and the wrong answers were wrong. So with all that being said, let's jump into our lessons on the seven layers of the OSI model in this section of the course. If we start at the bottom of the OSI model, we'll find our first layer, the physical layer. Now, this is where bits are transmitted across the network, and it includes all the physical and electrical characteristics of the network. So, this is going to tell us whether we're using an ethernet network, whether we're using fiber or copper cables, whether we're using Cat 5 or Cat 6, and even if we're using radio frequency in the case of Wi-Fi. Regardless of which method we're using to send our data across this first layer, it's always going to occur as bits, and these are binary bits. These are going to be a series of ones and zeros. Now, each media type has a different way of representing these bits, these series of ones and zeros, because these series of ones and zeros are the basic building blocks of all of our data. For example, if I'm using a copper wire, such as in a Cat 5 or a Cat 6 network, you may see that there's zero voltage on the wire when you have a zero bit. And then if you want to represent a one, you might use plus five or negative five volts on that copper wire. Now, the switching between these two levels tells us whether we should read a one or a zero on the network. And this is called transition modulation. Now, for the exam, you don't need to understand the specifics of transition modulation, but you should understand this basic concept. On this wire, we're going to have one level that represents a one and another level that represents a zero. Let me give you another example. This time, let's pretend we're using a fiber optic cable. Now, with fiber optic cables, we're going to use light instead of voltages. It's similar to the way we did voltages, but instead, when we want to represent a one, we turn the light on. If we want to represent a zero, we turn the light off. Now, we can just read the state of the light. Is it on? That's a one. Is it off? That's a zero. And the transition between these two states tells us whether we should be reading a one or a zero. Now, as we start understanding that, we then have to look at the cables themselves, because this is also part of our physical layer. If we're using something like a Cat 5 or a Cat 6 cable, we may have a certain connector on the end called an RJ45, which allows us to plug that cable into the back of a computer or into a switch. 
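Going back to the voltage and light examples for a second, here is a tiny Python sketch of that idea: one signal state for a one, another for a zero. The +5 volt figure is just the example value used above, not a real line-coding specification, and we'll get to how the connector itself is wired next.

```python
# One signal state represents a binary 1, another represents a binary 0.
# The +5 V / 0 V values are just the example figures from the lesson.
def copper_levels(bits: str) -> list[int]:
    return [5 if b == "1" else 0 for b in bits]      # volts on the wire

def fiber_states(bits: str) -> list[str]:
    return ["light on" if b == "1" else "light off" for b in bits]

print(copper_levels("10110010"))   # [5, 0, 5, 5, 0, 0, 5, 0]
print(fiber_states("1011"))        # ['light on', 'light off', 'light on', 'light on']
```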
Now, the way that connector is wired is based on a certain standard. We use two standards inside our networks, TIA/EIA-568A and TIA/EIA-568B. Now, we'll talk about these and how the pins are actually set up inside this connector in a future lesson. This is going to be important, because as we start talking about these, this is going to tell us whether we're using crossover cables or straight-through cables. If we use a crossover cable, we're actually going to flip the transmit and receive pins on one end of the cable. So, one end will be the A standard and one end will be the B standard. But if we're using a straight-through cable, or a patch cable, we're going to have the B standard on both sides. Now again, it's important to understand these wiring standards, so we're going to spend some time on them in a future lesson, because on the exam you may be asked to wire up an ethernet jack. Maybe they're going to tell you to make a crossover cable, and you have to drag the right colors to the right pins, that kind of a thing. And so, to make sure you're ready for that, we're going to cover that in a separate lesson. Now, at this point, we've talked about having our cables, we've talked about how we represent bits on those cables, and we've talked about how we're going to set up the connectors on those cables, but there's another thing we have to think about at the physical layer, and that's the topology of the network. How are we actually running these cables to physically connect the different devices together? Well, when we look at this from a layer 1 perspective, we can look at this based on the things we talked about in the last section of the course. Is it a bus? Is it a ring? Is it a star? Is it a hub and spoke? How about a full mesh, a partial mesh, or any other topology that we discussed? When it comes to figuring this out, you're going to look at how they're physically cabled. If you drew them out, does it make a line like a bus, a ring going in a circle, or a star pattern? That'll tell you what physical topology you have. This, again, is a layer 1 issue. Another issue that we have to concern ourselves with at layer 1 is synchronizing our communications. We have to ask ourselves, how does the receiving end know when it should be reading the ones and zeros that we're going to send it? Now, this sounds like a really easy thing to do if we're talking to each other, but with computers this can get much more complicated. So, to make sure that we understand this, we have two things that can happen. It can either be transmitted asynchronously or synchronously. Now, when I'm looking at asynchronous communication, you should be able to consider something like a voicemail. You call up your friend, they don't answer, and so you leave a message so they can listen to it later. The communication happens out of sync, or out of time. You do it, and then later on they can go back and listen to it. Now in networks, this happens via a start and stop bit. Similarly to how your friend can press play on their voicemail system to listen to their message, a network can send a start bit when it wants to begin the transmission, and then a stop bit to tell the other side, "Hey, I'm done transmitting. You've gotten everything I'm going to send." Now, if we decide to do communications synchronously, on the other hand, we have to be in the same place at the same time. 
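Since the two wiring standards just came up, here is a small Python sketch that writes out the usual TIA/EIA-568A and 568B pin colors and shows how a straight-through cable differs from a crossover cable. Treat it as a study-aid sketch rather than an official reference; we'll come back to the synchronous-versus-asynchronous example right after it.

```python
# Pin 1 through pin 8 wire colors for the two wiring standards discussed above.
T568A = ["white/green", "green", "white/orange", "blue",
         "white/blue", "orange", "white/brown", "brown"]
T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]

def cable_ends(cable_type: str) -> list[tuple[str, str]]:
    """Return (end one color, end two color) for pins 1-8 of the given cable type."""
    if cable_type == "straight-through":
        return list(zip(T568B, T568B))   # same standard on both ends
    if cable_type == "crossover":
        return list(zip(T568A, T568B))   # A standard on one end, B standard on the other
    raise ValueError(f"unknown cable type: {cable_type}")

for pin, (end_one, end_two) in enumerate(cable_ends("crossover"), start=1):
    print(f"pin {pin}: {end_one:<13} <-> {end_two}")
```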
So, going back to our phone example, if your friend picked up the phone and you had a conversation, you could talk to them and they could talk to you. This is a synchronized conversation, because you're both part of the conversation at the same time. Now, this communication happens in real time. That's what's great about synchronization. Now, when we talk about this from a network perspective, instead of using a start and a stop bit for synchronizing, we would use some sort of common time source. And so, maybe we're all going to use a clock, and every time a second passes, we can transmit and receive, and that tells us that we're going to do it on the cadence of the beat. That would be something that is synchronous. Now, in addition to figuring out if it's going to be asynchronous or synchronous, you also have to figure out how you're going to utilize the bandwidth of the cable. And there are two main ways that you can do this. One is called broadband and one is called baseband. Now, broadband is going to divide our bandwidth into separate channels. If you have a TV service at your house, you're probably familiar with this, because you have a single cable coming into your house, but it carries 200 or more channels. The user then is going to choose a single channel, and the rest are going to be filtered out. In contrast to this, we have baseband, where you're going to use all of the frequency of the cable all of the time. So, a telephone, for instance, uses baseband communication, which means that when you pick up the phone, you're using all the bandwidth allocated to that phone line. This doesn't hold true with the cable TV signal, right? Because we had 200 channels all using some of that bandwidth, but we only pulled out the ones we wanted. For this reason, we can only make one call at a time when using a phone, but we can have 200 channels or more on our TV. Now, when we use baseband, we're going to use a reference clock that allows the sender and the receiver to send and receive information at a certain time. By using this reference clock, this is an example of using synchronous communication. Another good example of a baseband network is a wired home ethernet network, because this is going to use all of the frequency that's available on your cable, giving you more bandwidth than you would have if you split it into broadband channels. Now, in this case, if we have a single baseband connection using up all of the bandwidth, we need to figure out how to get more out of it. And so to do this, we have a couple of different mechanisms we can use, and the first one of these is what's known as Time-Division Multiplexing. In this mode, each session is going to take turns using a dedicated time slot to get part of that bandwidth from the baseband connection. Now, an easy analogy to this is if you have a house with a single TV in it, but you have four family members. Everyone wants to watch TV, but there's only one TV, so they're going to have to take turns picking what program's going to be on. Now, in a pure time-division multiplexing environment, each person is going to be assigned a time slot and they can pick whatever TV show they want during that time slot. Now, this may or may not line up with the time that their show is actually on, and that could cause a problem, right? So, we have this second method called Statistical Time-Division Multiplexing, or StatTDM. 
This is a more efficient version of Time-Division Multiplexing, because it's going to dynamically allocate these time slots based on when people need them. So, if we take our TV example, maybe I want to go down and watch TV at eight o'clock when nobody's using it, but it isn't my time slot. Well, under Time-Division Multiplexing, I couldn't turn on the TV, because it wasn't my time slot, but with StatTDM I can, because as long as nobody's using the TV, anyone's free to use it. Everyone's going to take turns based on their necessity, not based on the time itself. So instead, when I start watching the TV at eight o'clock, then after my 30 minutes is up, I'm going to get off at 08:30 so somebody else can get on, even though my assigned time slot under TDM might've been something like 09:00 to 09:30. Now, our last method is what's known as Frequency-Division Multiplexing, or FDM. This is going to involve taking the medium, that cable, and splitting it up into channels, similar to the way we do in broadband. So, if I take a single cable and I break it up into 50, 100, or 200 different frequencies, then each person can get a small portion of frequency allotted to them and they can use it as much as they want. Now, for the exam, the good news is you don't need to memorize TDM, StatTDM, and FDM, but rather you just need to understand that multiplexing involves taking some limited amount of resource and using it more efficiently. In the real world, you may come across these multiplexing techniques, especially if you start working as a network engineer or a network architect, and that's why I wanted to introduce 'em to you. But for the Network Plus exam, just remember, multiplexing allows multiple people to use a baseband connection at the same time, and there's a small sketch of that idea at the end of this lesson. The final thing we need to talk about in this lesson is some examples of physical, or layer 1, devices. The most common one is going to be a cable. So, if I have a fiber optic cable, or an ethernet cable, or a coaxial cable, these are all different types of media. And if I have different types of media, that's considered a layer 1 device. The reason is, whatever goes in one end of the cable is going to come out the other end of the cable. So, if I have a fiber optic cable and I put light in one end, I'm going to get light out the other end. That's a purely physical process, which is why cables sit at the physical layer of the OSI model. Additionally, beyond wired cables, we also have wireless things. Things like Bluetooth, and Wi-Fi, and Near Field Communication (NFC). All of these radio frequencies make up the media at layer 1 for those types of networks. The final example is infrastructure devices, and that would be things like hubs, access points, and media converters. All of these devices operate at the bit layer. Their function is to simply repeat what they get. So, if I have a hub, whatever goes in port one of the hub is coming out ports two, three, and four. Whatever comes in gets repeated out. The same thing with a media converter. If I have something coming in over coaxial, it's going to get converted from one media type to the other and pushed out over fiber optic. That device is simply doing its work at the physical layer, and whatever comes in is going to go out. There's no logic to it. There's no intelligence to it. Layer 1 devices simply repeat whatever they're told. Whatever they take in, they send right back out. 
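And here is that multiplexing sketch: a toy Python comparison of plain TDM versus StatTDM. The user names and queued traffic are invented; it just shows that StatTDM hands an idle time slot to whoever has data waiting instead of wasting it.

```python
from collections import deque

# Toy comparison of plain TDM and statistical TDM (StatTDM). The queues below
# are invented traffic: each item is one time slot's worth of data to send.
traffic = {
    "A": ["a1", "a2", "a3", "a4"],
    "B": [],                       # B has nothing to send right now
    "C": ["c1"],
}
users = list(traffic)

def run(stat_tdm: bool, slots: int = 6) -> list[str]:
    queues = {user: deque(items) for user, items in traffic.items()}
    sent = []
    for slot in range(slots):
        owner = users[slot % len(users)]          # whose fixed time slot this is
        if queues[owner]:
            sent.append(queues[owner].popleft())
        elif stat_tdm:
            # StatTDM: an idle slot is handed to anyone who has data waiting.
            for user in users:
                if queues[user]:
                    sent.append(queues[user].popleft())
                    break
            else:
                sent.append("-")                  # nobody had anything queued
        else:
            sent.append("-")                      # plain TDM: the slot is simply wasted
    return sent

print("TDM:    ", run(stat_tdm=False))   # ['a1', '-', 'c1', 'a2', '-', '-']
print("StatTDM:", run(stat_tdm=True))    # ['a1', 'a2', 'c1', 'a3', 'a4', '-']
```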
Welcome to layer two of the OSI model, the data link layer. In the data link layer, we're going to package up the bits we got from layer one and put those into frames, and then we're going to take those frames and transmit them throughout the network while performing some error detection and correction, identifying unique network devices using MAC addresses, and providing some flow control. Now, a MAC address is a media access control address, which is a means for identifying a device physically and allowing it to operate on a logical topology. So when we talked about physical topologies in the last lesson and the last layer, we dealt with things physically, but now we have to deal with things on a logical level. These MAC addresses are incredibly important for dealing with switches and other layer two devices. When it comes to identifying MAC addresses, every manufacturer of a network card assigns a unique 48-bit physical address to every network interface card they produce. As you can see here, we have 12-digit hexadecimal numbers that are used to represent these MAC addresses. These MAC addresses are always written in hexadecimal, where each character represents four bits. The first 24 bits, or the first six hexadecimal characters as you can see here, identify the particular vendor who made that card. In our example, we have D2:51:F1, and this is going to uniquely identify whichever manufacturer made this card. I like to think about this like a Social Security number in the United States. The first three digits used to identify where the number was issued. For instance, if my Social Security number was 123-45-6789, the first three digits might say it was issued in California, and the rest of it, those other six digits, are going to uniquely identify me. Well, this is the same thing that happens with a MAC address. The first half of the MAC address, the first six digits, is going to tell us who made it. Was it made by Apple, Dell, Ralink, or whatever? The second half is going to represent the exact machine it belongs to. This is important for our logical topology because we can look at the MAC address and observe the flow of data going through our networks. And at this point, we don't really care how these devices are physically connected. That is a layer one issue. But now at layer two, we care about whose turn it is to talk and transmit so other devices aren't talking over each other. For example, when I teach this course in a classroom environment, instead of all of the students shouting out their answers at once, we use a system of raising our hands. We wait for the teacher to call on one of the students, and then we can let them ask a question. This is how we control the information flow so that everyone can hear each other. In a network, we use electronic mechanisms to do this same thing. Now, logical link control is going to provide connection services and allow your recipients to acknowledge that the messages have actually gotten where you thought they were going. So for example, if I called you up and asked if you got my phone call, you could say yes, and that would acknowledge the receipt of it, and then we can move on to the next message. Logical link control does this for our networks, and because of this, it's the most basic form of flow control. 
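As a quick aside before we continue with flow control, here is a small Python sketch that splits a MAC address into those two halves. The vendor prefix D2:51:F1 is the example from above; the last three octets are invented filler.

```python
# Splitting the example MAC address from the lesson into its two halves: the
# first 24 bits (the OUI) identify the vendor, and the last 24 bits identify
# the specific interface. The last three octets here are invented filler.
mac = "D2:51:F1:4A:7B:2C"
octets = mac.split(":")

oui = ":".join(octets[:3])        # vendor portion, assigned to the manufacturer
device_id = ":".join(octets[3:])  # device portion, unique per interface card

print("vendor (OUI):", oui)               # D2:51:F1
print("device part: ", device_id)         # 4A:7B:2C
print("total bits:  ", len(octets) * 8)   # 6 octets x 8 bits = 48
```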
Essentially, logical link control is going to limit the amount of data that a sender can send at once and keep the receiver from being overwhelmed. So if I go back to my classroom example, if I'm sitting there and I'm moving too quickly, a student might raise their hand and say, "Hey Jason, I don't understand this. Can you slow down and repeat it?" In the case of this video, you can just pause or go back and watch that part again. But in a classroom, they can't, so they may ask me to repeat it. Logical link control does much the same thing, allowing a device to request either less information at a time or a replay of that information. Logical link control also gives us some basic error control functions, such as allowing the receiver to inform the sender if a data frame wasn't received or if it was received corrupted. And it does this by using a checksum, in this case a simple parity check. Since everything the receiver gets is just a series of ones and zeros, it's going to add up all of the one bits and see whether the total is even or odd. The sender includes a parity bit that says what that answer should be, so if the receiver's count matches the parity bit, it's going to assume the frame was good. But if the count doesn't match, the receiver can figure that something was corrupted and then ask for a retransmission of the frame. Now, communication can be synchronized across layer two according to three different schemes. We have something known as isochronous mode, which happens when the network devices use a common reference clock, similar to synchronous, yet they also create time slots for transmissions, much like we did with time-division multiplexing. This has less overhead than either of the other two modes because both devices know when they can communicate and for exactly how long. The second method we can use is known as synchronous mode, and this is much like what we used back in layer one. It's going to involve devices using the same clock, but the reason it's different from isochronous is that this is going to allow us to have beginning and ending frames and special control characters to tell us when we're going to start and when we're going to end based on those beats. For example, in music, songs have various time signatures, things like 3/4 or 4/4 timing. This tells us how many beats are in each measure. Our networks operate much the same way in that our devices can only communicate at the frequency specified by those particular clock cycles. Because of this, there can be a lot of gap time that isn't properly utilized, and this becomes a major drawback for synchronous mode. And finally, of course, we have asynchronous, which is going to allow each of our network devices to reference their own clock cycles and use their own start and stop bits. In this way, there's no real control over when the devices are allowed to communicate, though, and that becomes the major drawback here. Now, when we look at layer two devices, we have things like network interface cards, bridges, and switches. In contrast to how a hub is a dumb machine that simply takes a message coming in and repeats it back out, switches are smarter. They can actually use logic to learn which physical ports are attached to which devices based on their MAC addresses. 
And in this way, they can send data to specific devices in the network, allowing us to pick and choose different lines of communication to go to different areas. Now, we'll talk all about how this works and how these switches do this, including things like CAM tables using the MAC addresses and how they're doing the switching across the network, in later lessons. And we'll go into depth on that because you will need to understand it to understand how networks really work. But for right now, just remember that switches, bridges, and MAC addresses are three great examples of things that operate at layer two, the data link layer. Now that we're at the network layer, we're concerned with routing. Layer three is all about how we're going to forward traffic, which we refer to as routing, using logical addresses. For example, your computer has an IP address, and that IP address is either going to be an IPv4 or an IPv6 address, or both. Now, both of these are considered layer three protocols. And we're going to talk more about them as we go through this course. Now, the other thing we're going to be concerned with here is logical addressing. I mentioned that IPv4 and IPv6 are two types of logical addresses, but they're not the only logical addressing schemes that are out there. They're just the most common and most popular these days. Now, we're also going to be concerned with what's known as switching. And here when we talk about switching, we're actually talking about layer-three switching, which is called routing. Now, I know this gets confusing because we're using the term switching to refer to the function of routing, while the devices we call switches are layer two devices. So you have to keep this straight in your head. Switches, the physical devices, are layer two. Switching is another term for routing, which is how we transfer things at the network layer, layer three. Now, as we talk about all this, another thing that comes up is how we're going to do route discovery and selection. Now basically that means, how do I know which way I want the traffic to go? We're going to talk a little bit more about that too as we go through this lesson, because we're going to talk about connection services, and bandwidth utilization, and multiplexing strategies. All of this sits at layer three of the OSI model. So to start diving deeper into these concepts, let's start out with logical addresses. There are lots of different routed protocols that have been used over the years. Back in the '80s and '90s, there was AppleTalk for Apple computers. And if you used a Windows or a Novell network computer, you might have used IPX, which was the Internetwork Packet Exchange. Now, neither of these two is important for the Network+ certification, but it is something that I want to bring up because you may hear these terms. Really what happened was these were both killed off by the Internet Protocol, which is known as IP. This became the common protocol that everyone uses on all networks. And therefore we didn't need AppleTalk and we didn't need IPX anymore. But the point is, at layer three, it's not just IP. There are other protocols that you could use at layer three. IP is just the most common. Now, some of those still exist on some legacy systems, which just means old systems in some corporate networks, but the routed protocol of the internet, the internet you have at home, and the network you have at home is going to be known as IP. 
IP comes in two variants, as I said before, IPv4 and IPv6. Now, if you look on the screen here, there's an example of an IP address. It's written as 172.16.254.1. Now we're going to look more at IP addresses in a separate lesson as we dig deeper into routing later on. For the time being, I want you to think of an IP address anytime you see a number that looks like this. There's going to be four sets of numbers separated by dots. This is called dotted octet notation. And this is what an IPv4 address is going to look like. Now, how should we actually forward or route the data across our networks? This is really the big question at layer three. And there are three main ways for us to do this. You can use packet switching, circuit switching, or message switching. The most commonly used one in your network is going to be routing, which is also known as packet switching. This is where data is divided into packets, and then each packet is forwarded on based on its IP address. Now, when I think of routing, I like to think about this as if I'm going to write a letter and send it to my mom. Let's say I put that letter in an envelope, and on the outside of the envelope, I'm going to write the address of my mom on it. I put her city and her state and the zip code. Now I put that in the mailbox, and the mail carrier is going to take it to a central location. Here, they're going to figure out what state it goes to. And then they're going to send it to that state's post office. Once it's in that state, it's going to go down even further. And they're going to go down to the city-level post office. And then from that city-level post office, they're going to look at where the street address is, get it to the right street, and eventually get it to her house on that street. The same kind of concept works with IP addresses. And that's the idea here. It's going to keep going and switching that packet from place to place until it gets to its final destination, in the case of my envelope to my mom's address. Now that's the way that packet switching is going to work for us. Every time I send a letter out, it might take a different route to get there. And I really don't care which route it takes as long as it gets to its final destination. It's the same thing with our packets in the network. When I talk about circuit switching though, this is where we want to have the same path each and every time. We're going to get a dedicated communication link that's established between our two devices. And if I pick up the phone to make a phone call, I'm actually going to make a virtual connection from my phone over to the other receiver's phone on the other end. So if I pick up the phone to call you, there's going to be a temporary connection made between my phone and your phone, and all the data that we're talking back and forth will go across the same path to get from me to you. The whole time we have this conversation going on, that's what's going to happen. That's what we call a circuit switch connection, which is different than the packet switch connection of using an envelope where we don't care where all those envelopes go as long as they got to the right place at the end. Now, when we hang up the phone and we go to make another call, it could take a different path. And that's okay. But for the entire session of us having our phone call, we want the same path. And that's what circuit switching allows us to do. Now, the third type of switching we have is known as message switching. 
And with message switching, this is where all the data is divided into messages. And they're similar to packet switching in this idea, but the messages can actually be stored and forwarded, more like email. So if you go back to my mail example, maybe it gets to my mom's state, but the post office is closed because it's Sunday. So what happens is they drop all those envelopes on the floor. Now, it's going to be held there until they open again on Monday and somebody's going to be able to pick it up, figure out where it goes, and push it along. This is what happens when you're dealing with message switching, because it has this store and forward capability. If we were using just packet switching, what would end up happening is if it got to the post office and the post office was closed, it would actually just shred that envelope and nobody would ever see it. That's a bad thing if we want to make sure the data's going to get where it's going. And that's why message switching can be very useful for us. Now, almost all of our networks nowadays, and the ones you utilize, are going to be using packet switching though. And the reason is we have other methods that will check if something is not getting to the distant end. And it'll be resent over another path until it finally gets there. So unless you're dealing with some kind of big backend networks, you're not really going to see something like circuit switching or message switching in your normal everyday networks. Your home network, my small office network, and most of the internet actually work using packet switching. Now, the second thing we have to talk about is route discovery and selection. How are we going to decide which path we're going to take to send that message? Well, routers are going to maintain a routing table so they can understand how to forward a packet based on the destination IP of where it wants to get to. There are lots of different ways that they can do this. And they can do this either as a static route or a dynamically assigned route using a routing protocol like RIP, OSPF, EIGRP, and many others. Now, we're going to talk about many of those later on in this course. So we're not going to talk specifically about how it works right now. We're just going to put that to the side, but I want you to remember that routing protocols help us decide how data is going to flow across the network and how the routers are going to communicate that information. For now, let's just use the example on the screen to give us a really basic idea of how routing works. Let's say that I'm sitting in router number five at the bottom right corner and I want to get to router number one. Well, how should I do it? I can go from five to four to one, and that would work. But I can also go from five to four to three to two to one, and that would also work. So how do I know which way is going to be the best way for me to go? Well, if I end up using a dynamic protocol, all of these routers continually talk to each other all the time, and they tell each other which way they know how to get to other routers and which one is the best and fastest route. So if you think about this like streets, when you type into your GPS that you want to go from point A to point B to get to the grocery store, it may take you three or four different ways depending on the time of day, the traffic, the congestion, and a number of other factors. Routers are doing the exact same thing. They all talk to each other and say, "Hey, I've got a better way for you to get from point A to point B because there's too much traffic in this direction. So you should go and take this other route instead." That's the idea with route discovery and route selection. 
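To make the routing-table idea a bit more concrete, here is a small Python sketch of a longest-prefix-match lookup using the standard ipaddress module. The prefixes and next-hop names are invented for this illustration, and real routers also weigh metrics learned from protocols like RIP, OSPF, or EIGRP, which this sketch ignores.

```python
import ipaddress

# A toy routing table: (destination network, next hop). The entries are invented.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),  "Router 2"),
    (ipaddress.ip_network("10.1.0.0/16"), "Router 3"),
    (ipaddress.ip_network("0.0.0.0/0"),   "Router 4"),   # default route
]

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    # Longest-prefix match: the most specific network containing the destination wins.
    matches = [(network, hop) for network, hop in routing_table if dest in network]
    best_network, best_hop = max(matches, key=lambda entry: entry[0].prefixlen)
    return best_hop

print(next_hop("10.1.2.3"))      # Router 3 (the /16 is more specific than the /8)
print(next_hop("10.200.0.9"))    # Router 2
print(next_hop("172.16.254.1"))  # Router 4, via the default route
```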
Now, the next thing we need to talk about here is connection services. Connection services are going to augment the layer-two connection services that we talked about previously and provide us with some additional reliability. Again, we're going to have some more flow control added here. And this is going to prevent the sender from sending data faster than the receiver can take it. Again, that's the way that we have flow control there, so the receiver can say, "Hey, hey, hey, slow down. You're sending me too much data," or, "Speed up, I can take more. I'm ready for more." We also have this thing called packet reordering. Now, packet reordering is really important because it allows us to take a big chunk of data, cut it up into little packets, and then send all those packets off in different directions to get to their final destination. Now, the problem is sometimes these packets are going to arrive at the destination in the wrong order. And so packet reordering allows the receiver at the end destination to get all this data and say, "Okay, I got packet one and packet five and packet two and packet four and packet three, and then I'm going to put them in the right order, one, two, three, four, five, and then I can put that data back together into what it was. And now I have the full piece of data together." The benefit here is that each packet gets numbered and sequenced, so even if they get to the other end out of order because they took different routes, we can put them back into the right order and read them as a coherent message. The next thing we need to talk about here at layer three is known as ICMP, or the Internet Control Message Protocol. ICMP is used to send messages and operational information to an IP destination. The most commonly used tool based on it is known as ping, P-I-N-G. And we're going to talk specifically about that tool in our troubleshooting lecture. As you can see in this example, we can send out a single packet as a test to example.com. And when it comes back, we can then say if that site is up or down. This is what ping does. It sends out a packet and tells us if it was received or not by the distant end and how long it took. In this case, we got a response back five different times, showing that the site was up and we were able to get to that distant end. Now, this is not a tool that's used regularly by end user applications, but it is used by us as administrators to help troubleshoot our networks and figure out what is up, and what is down, and what isn't working. Again, the most commonly used one here is going to be ping, but there's another variation of it known as traceroute, which will trace the route that a packet takes through the network and tell you every single router along the way as it goes through, essentially doing a large series of pings through each and every router so you can figure out which routes were up and which routes were down. 
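If you ever want to run that kind of check from a script, one simple, hedged approach is to shell out to the operating system's own ping utility, as in the Python sketch below. The -c count flag is the Linux and macOS form (Windows uses -n instead), and example.com is just a placeholder target.

```python
import subprocess

# Shell out to the OS ping utility to send four ICMP echo requests.
# "-c 4" is the Linux/macOS flag; on Windows the equivalent is "-n 4".
result = subprocess.run(
    ["ping", "-c", "4", "example.com"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode == 0:
    print("Host replied to our ICMP echo requests, so it appears to be up.")
else:
    print("No reply; the host or a router along the path may be down or filtering ICMP.")
```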
Now, what are some examples of layer-three devices that we need to remember for the exam? Well, the first two you have to remember are routers and multi-layer switches. A router looks like this icon here, as you can see on the screen. It's a circle with four arrows, and this is a depiction of what a router looks like in a logical diagram. Now, a multi-layer switch works like a regular switch and a router combined. It has the features of both a layer-two switch and a layer-three router in a single device, which is why it's considered a layer-three device. Again, for the exam, remember that a switch is always a layer-two device unless they specifically tell you that it is a multi-layer switch. If it's a multi-layer switch, it is going to be considered a layer-three device. Now, some other things we have at this layer are IPv4 and IPv6. These are both layer-three protocols. We also have ICMP, the Internet Control Message Protocol, that we just talked about, which is used in troubleshooting. All of these are found at layer three. The best ones to remember are IP and routers, because these are going to be the most common examples you're going to see on test day if they ask you for something that operates at layer three. Layer 4, the transport layer. Now, the transport layer is our dividing line between what we call the upper layers of the OSI model and the lower layers of that OSI model. Now, we've already covered the lower layers when we talked about the physical, the data link, and the network layers. And so now we're going to move into the upper layers, starting with this layer, the transport layer, then the session layer, the presentation layer, and the application layer. Now, in the next couple of lessons, we're going to cover each of these as we go forward. Now, the segment is our data type here when we're dealing with the transport layer. When we deal with segments and datagrams, we're talking about the transport layer. Now, as we talk about datagrams, we're going to go into those a little bit more in depth later. But for now, let's focus on the two protocols that we have inside Layer 4, which are the TCP and UDP protocols. And we're also going to introduce a couple of extra reliability features here, known as windowing and buffering. Now, what is TCP? TCP is the Transmission Control Protocol. It is a connection-oriented protocol, which means it's a reliable way to transport segments across our network. If a segment is dropped, the protocol is going to notice, because it expects an acknowledgement for each and every segment it sends, and if it doesn't get that acknowledgement, it's going to resend that piece of information. That's why we call this a connection-full protocol, because it has this two-way flow of information where I'm sending you information and I'm verifying that you actually got it by listening for your response. Now, let's look at this little diagram here on the screen for a second. You're going to see that I have a client on the left and a server on the right. Now, the client is going to send what's called a SYN packet, or a synchronization packet, over to the server. Now, when the server gets that, it's going to send back a synchronization acknowledgement to the client, known as a SYN-ACK. Now, when the client gets that acknowledgement, it's going to send back its own acknowledgement to the server. This is known as the ACK. Now, when we do this SYN, SYN-ACK, ACK, this is what we refer to as a three-way handshake. Essentially, it's the client going, "Hey, server, are you ready to get some information?" And then the server says, "Sure, why not? Send me some information." And the client says, "Okay, here it comes." And then the transmission is going to begin. 
Because we've established that three-way handshake, we know that both sides are ready to communicate. "Now, are you ready?" "Yes, I am." "Here it comes." Now, every time this data, which we call a segment, is sent across the network, there is going to be an acknowledgement that it was received, and that tells us there was successful two-way communication occurring. Now, if the server's expecting to get 100 pieces of information but it only got 98 of those, it's going to say to the client, "Hey, you told me you were going to send me 100 things, but you only sent me 98. Send me over those two things that I'm missing." And then a retransmission occurs. This way, the communication can go forth, and we can always make sure we're getting what we're supposed to get, because we have this resending of the data across the network. Now, this is used for all network data that needs to be assured of getting to its final destination. I like to think about this like certified mail. If I want to send a message to the IRS, for example, I want to make sure that they get it and it doesn't get lost in the mail. So I might pay a little extra money to get a certified receipt that, when it gets there, they have to sign, and that gets mailed back to me. This way, when I get that receipt back, I know that the IRS got my mail package. That's the way TCP works. Now, on the other hand, we have another protocol known as UDP. UDP is what we call a connectionless protocol, meaning it doesn't have to establish a connection first. UDP stands for the User Datagram Protocol. And the reason why we call it a datagram is because if you're using UDP, you're using this type of data unit. It's called a datagram. And so for the exam, I want you to remember that Layer 4 is for segments almost exclusively, because we use it with TCP. But if you're using UDP, the data unit is now called a datagram. So if you have a datagram or a segment, you're at Layer 4. Now, when we talk about UDP, UDP is unreliable, and it transmits its data as datagrams, and if they're dropped, the sender will never even know that it happened. Now, why would I want to send stuff where the sender isn't aware of whether it arrived and I don't get any kind of receipt? Well, UDP is really good for audio and video streaming, because you send a lot of data and there's a lot less overhead when you use UDP, because we don't have that constant three-way handshake to establish it and we don't have all the checks and balances that are associated with using TCP. So by using UDP, you can really increase the performance of your network because you're going to have zero retransmissions. You're just going to end up dropping information. Now, isn't that a bad thing? Why would we want to drop information? Well, for certain applications, it really doesn't matter. For example, you're streaming this video right now, and if I dropped out for one one-hundredth of a second, would you even notice? You probably wouldn't. And that's why UDP is so good, because we can drop a hundredth of a second here and there and you're really never even going to notice it, and there won't be a retransmission. But with TCP, it's going to lead to a lot more buffering, because you have to wait and then get it resent to you and then put it in the right place and then play it back. And because of that acknowledgement and that overhead for every single second of this video, it's going to end up making it a lot larger and use a lot more bandwidth. And that's one of the big reasons why we use UDP for video streaming and audio streaming. 
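Here is a short, self-contained Python sketch of that difference using the standard socket module. The TCP connect triggers the kernel's three-way handshake for us, while the UDP sendto is pure fire and forget; the port numbers are arbitrary values chosen for this local demo.

```python
import socket
import threading

# TCP is connection-oriented: the kernel performs the SYN / SYN-ACK / ACK
# handshake when the client calls connect() and the server accepts.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 50007))      # 50007 is an arbitrary port for this local demo
server.listen(1)

def serve_once():
    conn, _ = server.accept()          # the handshake has completed by the time this returns
    print("server received:", conn.recv(1024).decode())
    conn.sendall(b"got it")            # application-level acknowledgement
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

client = socket.create_connection(("127.0.0.1", 50007))   # SYN, SYN-ACK, ACK happen here
client.sendall(b"hello over TCP")
print("client received:", client.recv(1024).decode())
client.close()
t.join()
server.close()

# UDP is connectionless "fire and forget": no handshake, no built-in acknowledgement.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over UDP", ("127.0.0.1", 50008))   # succeeds even if nobody is listening
udp.close()
```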
Now, let's do a quick little summary here of TCP versus UDP, because this is a really, really important concept. In fact, if you have your notes out right now, I would write down this chart as we talk about TCP versus UDP, because it really is that important. Now, first, TCP is reliable because it has a three-way handshake, whereas UDP is not very reliable. It's an unreliable protocol because there is no three-way handshake. TCP is what we call connection-oriented, or a connection-full protocol, because we have that three-way handshake and the acknowledgements. But UDP is connectionless. It's a fire-and-forget method. I just start sending out information, and hopefully you're going to get it. TCP uses segment retransmission and flow control that's handled through windowing, which we're going to talk about more in just a second. With UDP, on the other hand, there is no retransmission and no windowing. With TCP, we have sequencing of all of our different segments. With UDP, there is no sequencing. Now, what this means is, as I send everything out, I'm going to send it out in the proper order, from 1 to 100. I'll do this for both TCP and UDP. Now, if you miss some of those pieces, or they arrive in a different order because they take different paths over the network, with TCP there's sequencing, so it knows that you should have 1 to 100, and it puts them back in the right sequence. With UDP, whatever order they come in, that's the order they get handed up in. And so they can be coming in 1, 50, 2, 5, 3, 4, 6, 20, in any random order like that. And that's how you're going to hear it. So with video, you may hear a little bit of jumpiness or a little bit of high-pitched squeaks or something like that, because one of those frames may have come in out of order. Now, when we go back to TCP, it is going to acknowledge each of those segments. And so we have acknowledgement. If I don't get something, I know that I didn't get it, and I can have it retransmitted to me and then get it again. With UDP, there is no acknowledgement. So, again, UDP has a lot less overhead because there's no connection, no windowing, no retransmission, no sequencing, and no acknowledgement. Now, if you have to get something there and you want to make sure the person got it, you really have to use TCP as your protocol of choice. And that's why we really are going to use TCP for things like banking and websites and e-commerce and things like that. But if we have something that has a lot of data, like audio or video streaming, UDP really does well with that, because we don't need to get every single piece of that file. We can skip a little bit here and there, and that's okay. Now, earlier in the lesson, I mentioned a concept known as windowing, and I said we'd get to it later. Well, here we are. We're going to talk about windowing. Now, what is windowing? Windowing is going to allow the clients to adjust how much data gets sent before an acknowledgement is required as the transmission goes on. This way, we can continually adjust to send either more or less data at a time as the segments are being transmitted. So the whole idea here with windowing is that if you're sending data and you're getting a lot of retransmissions, well, you might be sending too much information at once. So you need to back that down and close the window a little bit, so you'll send less each time. 
Now, if you're not getting any retransmissions, it means you're probably not sending fast enough. So, instead, we can open up that window and send more data with each transmission. And then if we start getting a lot more of those retransmissions happening again, well, we start closing that window down. And so we're always opening and closing the window to maximize our throughput and our bandwidth here. So if you've ever copied a file over a network on a Windows machine, you've probably seen where it starts copying that movie file and it starts saying, "Hey, you have 20 minutes remaining," and then it drops down to five minutes remaining, then it jumps up to 50 minutes remaining, and then 30 minutes remaining, and then an hour, and then it goes down to three minutes. And it has a really hard time estimating how long it's going to take to move that really large file off of your shared drive and onto your Windows computer. Now, why is that? Well, that's windowing at work. What's happening here is that as there are issues on the network and there are a lot more retransmissions, the window decreases and becomes smaller. And that means we have to send the data in more, smaller chunks to get it all across, which takes more time. Now, as things go better and the network starts flowing again, that window opens up and we can send more data each time, and that's going to end up decreasing the estimated time remaining. So what happens here is, as you can see on the screen, let's say that little green line is what I'm sending, and I start sending the information over. But that red starts creeping up to where we start not being able to keep up with it. So we'll come back down, and then the red can creep up again, and then we'll come back down, and we'll keep doing that. Hopefully, the red and the green here will eventually match at a higher level than where we started. And so as we open up that window and close that window, we can start out slow and then we can go faster by opening up that window, and faster and faster until we have problems, and then we'll start closing it down again. And we'll keep doing that over and over and over again until we get the best bandwidth we can, as we try to push as much data as we can. So, for example, if I start counting numbers to you, I'm going to start going slow. One, two, three. That's pretty slow, right? You'll say, "Okay, okay, I got it, Jason. You can go faster." So I'll start talking faster. One, two, three, four, five. "Okay, that's still good. Let's try again." One, two, three, four, five. "Oh, wait, wait, wait, wait. That's too fast, Jason." Okay, let me slow down. One, two, three, four, five. You got it, okay? And we keep doing that. That's the idea of windowing. I'll speed up and I'll slow down until you don't have any errors copying down what I'm saying. That's the whole idea here. We want to be able to send as much information to you as quickly as possible with the least amount of retransmissions, but still getting the maximum throughput. 
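As a rough picture of that grow-and-shrink behavior, here is a toy Python simulation. The loss rounds and the halve-on-loss rule are invented for illustration; real TCP flow and congestion control are considerably more involved.

```python
# A toy model of the "open the window / close the window" behavior described above.
window = 1                       # segments we are allowed to send per round
loss_rounds = {4, 9}             # pretend retransmissions were needed in these rounds

for round_number in range(1, 12):
    if round_number in loss_rounds:
        window = max(1, window // 2)   # retransmissions seen: close the window down
        note = "loss -> shrink"
    else:
        window += 1                    # clean round: open the window a little wider
        note = "ok   -> grow"
    print(f"round {round_number:2d}: window = {window:2d} segments ({note})")
```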
The same thing happens when you try to load a video. If the network is congested, the player will take in a lot of information at first, in anticipation that you're going to watch it faster than you're able to download the rest of the video. And that's the idea with the buffer in our routers as well. Now, what if the buffer overflows? On a router, you only have so much space, and if you keep putting information in because you can't send it out fast enough, you're going to run out of memory. And when you run out of memory, segments get dropped. So let's look at an example of how buffering is going to work. Let's take a look at the buffer on a router. Here we have Router 4 in our diagram. Notice how it's the central point of this diagram. Now, I have traffic coming into it from Router 6, Router 1, and Router 3. If I look at all of those, there's 100 megabits, 100 megabits, and 10 megabits, so that's a possibility of 210 megabits per second of information going into Router 4. Now, if it needs to send that information out to Router 5 and there's only a 50-megabit connection, you can see pretty quickly that there's going to be a bottleneck here. What's going to end up happening is that Router 4 is going to have to catch all that extra information in its buffer. And when it has more availability, it'll send that information out to Router 5 and clear its buffer. You may ask, "Why would we design a network this way?" Well, often there isn't going to be 100% utilization from Router 4 to Router 5. Maybe that's our exterior WAN connection going out to the internet. In fact, Router 1 may only be sending 10 or 15 megabits per second right now, Router 3 might be sending 30, and Router 6 might be sending one, which all added together is less than 50, so no buffering would occur. Now, there's also the possibility that Router 1 and Router 3 send us more information, and that can cause buffering to occur, because Router 4 can't send data out to Router 5 fast enough over that 50-megabit-per-second connection. So this is the idea. We want to buffer things and hold them, and then as we have room, we can send that information out, clear the buffer, and keep on moving. Because when we look at your networks, chances are that not every device is communicating at 100% of its capability all of the time. And so by doing this, we can pay for a smaller connection to the outside world, that 50-megabit connection, as opposed to paying for a large and expensive fiber connection of one gigabit per second. So we can keep our costs down by knowing what our utilization is over our network over a long period of time. Now, that gets into some more advanced concepts that you'll start working on in your designs as you work as a network engineer. And as bandwidth keeps getting cheaper, it becomes less and less important for us, at least in the small office, home office environment. But in large corporations, this is a big deal. Now, what are some examples of Layer 4 devices? Well, we have TCP and UDP as our protocols for Layer 4. So if you see TCP and UDP, you know you're dealing with Layer 4. We also have things like WAN accelerators, where we add compression to our packets and then send those segments through the WAN accelerators to get them across our network faster.
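Going back to that buffering bottleneck for a moment, here's a minimal sketch of it with made-up numbers: traffic arrives at Router 4 faster than the outbound link to Router 5 can drain it, the overflow goes into a fixed-size buffer, and anything beyond the buffer's capacity gets dropped. The capacities and arrival bursts below are hypothetical, chosen only to show the behavior.

```python
from collections import deque

BUFFER_CAPACITY = 20          # hypothetical: how many segments Router 4 can hold
DRAIN_PER_TICK = 5            # what the slower link to Router 5 can carry each tick
ARRIVALS_PER_TICK = [8, 8, 8, 2, 2, 1, 0, 0]   # made-up bursts from Routers 1, 3, and 6

buffer = deque()
dropped = 0

for tick, arriving in enumerate(ARRIVALS_PER_TICK, start=1):
    for _ in range(arriving):
        if len(buffer) < BUFFER_CAPACITY:
            buffer.append("segment")   # held until the outbound link has room
        else:
            dropped += 1               # buffer overflow: this segment is dropped
    for _ in range(min(DRAIN_PER_TICK, len(buffer))):
        buffer.popleft()               # sent out to Router 5, clearing the buffer
    print(f"tick {tick}: {len(buffer):2} segments buffered, {dropped} dropped so far")
```

Now, back to our list of Layer 4 devices.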
We also have load balancers and firewalls that can operate at Layer 4 by blocking and allowing different ports and protocols to go through them. For example, if you've ever gone into your firewall and blocked a port like port 80 over TCP, well, that is a Layer 4 block because you're blocking the port, port 80, which is web traffic, and the protocol, TCP, which is our protocol at Layer 4. Layer five, the session layer. Now, layer five is our session layer, and we want to start thinking about what a session is. The way I like to think about a session is that it's a conversation that has to be kept separate from all of the others to prevent the intermingling of data. Now, think about yourself in a classroom setting where we might have 20 students sitting here. If I wanted to ask a question of just one student, everyone else could still listen in. So maybe I take that student and we walk out into the hallway so we can have our own session, our own private conversation, while all of the other 19 students talk among themselves and have their own session. That way we keep these two sessions separate, and each group can talk at the same time without interfering with the other: me and my student out in the hall, and the other 19 students sitting back in the classroom. That's the idea when we talk about the session layer. We have all sorts of data flying around our networks all day long, and by establishing these sessions, we can separate them to prevent the intermingling or cross-contamination of that data. Now, inside the session layer, we're going to set up a session, maintain that session, and tear down that session. What is setting up a session? This is where we're going to check our user credentials and assign a number to the session to help identify it. It's basically some random number that's going to allow us to negotiate services for that session with the server and then negotiate who's going to talk first, either the server or myself. Now, let's go back to our classroom setting once more. If Johnny asks me a question, he says, "Hey, Professor Dion." I'll say, "Yes, Johnny?" And then guess what? We're going to start talking because we've established that session. I know that Johnny wants to talk to me, and Johnny knows that I'm ready to talk to him. I know who he is, he knows who I am, and we both know what each other wants: we want to start talking. In a computer network, it's a little bit more involved, but that's really the idea behind a session. Now, the next thing we have to do is maintain this session. This is where we're going to transfer data back and forth across the network, over and over again. Going back to my classroom example, if Johnny asks me a question, I'd say, "Yes, go ahead and ask the question, Johnny." And then Johnny's going to ask his question. At this point, we're maintaining a session. So here we are going back and forth, back and forth, where I can answer his question and see if I can answer everything he has for me, right? Well, the same idea happens inside our networks. Now that the session has been established, we're going to send all of our data back and forth over and over again. Now, if we have a break in the connection, we're going to have to reestablish our connection.
So, for instance, I might say, "I'm sorry, I didn't hear you, Johnny. Can you repeat your question?" Again, I'm maintaining that session so that I get what he wants to tell me, and then I can answer him. I'm also going to acknowledge the receipt of the data. I'll say, "Did you understand my answer, Johnny?" And he'll say, "Yeah, I did," or "No, I didn't." If he did, he'll acknowledge the receipt. If he didn't, he's going to tell me, "No, I didn't understand. Can you tell me that again a different way?" Well, the same thing happens digitally with your networks. And that brings us to the final area, tearing down a session. So now that the student's question has been answered in our classroom analogy, I'll say, "Johnny, does that answer your question?" And hopefully he'll say, "Yes, it does." At that point I'll say, "Okay, we're going to move on to the next thing in the course," and as I say that, we tear down the session. The session between me and Johnny is done, and I'm going to go back to teaching. If Johnny has another question, he's got to raise his hand again so that I can say, "Okay, Johnny, what's your question?" and we'll start a new session, right? That's the end of that particular session: once I've answered his question and gotten confirmation that he has it, we can move on. Now, this is done based on mutual agreement. Once we've transferred all the data back and forth that we wanted to, we verify that I'm ready to tear down the session and that you're ready for the session to be torn down, and then we can tear down that session. The other way we could take down a session is if one of the parties disconnects and we simply can't reconnect to them. For instance, let's go back to my classroom example. Johnny asks a question, I go on a 30-minute diatribe trying to explain every possible thing to answer his question, and I look up and guess what? Johnny has his head down on the table and he's fallen asleep. I bored him to death. There's no way for me to maintain that session because he's not getting the information. He's completely asleep, dead asleep, and he is not listening anymore. He has disconnected from this conversation, and therefore I can't re-engage, so I'm just going to stop and move on to other students who are awake, are paying attention, and actually have questions for me. That's the idea with tearing down a session. This can be done either mutually, where we both have finished the communication and we say, "Yep, I'm done, you're done, we understand each other," and we move on. Or the other party simply disappears, in which case I want to free up my resources so I can go help other people again. And that's what I would do if I were a computer. So what are some examples of layer five devices, protocols, or things that you should associate with layer five? Well, the two big ones are H.323 and NetBIOS. Let's talk about H.323 first. H.323 is used to set up, maintain, and tear down voice and video connections. If you're using FaceTime or Skype or something like that, you're probably using something like H.323. These connections operate over the Real-time Transport Protocol, known as RTP. Anytime you see RTP, I want you to think about streaming audio or streaming video, generally in a two-way format, like a phone call or a FaceTime session.
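If it helps to see that set up, maintain, and tear down pattern as code, here's a purely illustrative sketch. The class name, the credential check, and the random session ID are all made up for the example; this isn't any real session-layer protocol, just the shape of the idea.

```python
import random

class Session:
    """Illustrative only: one conversation kept separate from all the others."""

    def __init__(self, username, password):
        # Set up: check the user's credentials and assign a number to identify the session.
        if (username, password) != ("johnny", "classroom"):   # made-up credentials
            raise PermissionError("credentials rejected, no session established")
        self.session_id = random.randint(100000, 999999)
        self.open = True

    def exchange(self, question):
        # Maintain: transfer data back and forth, acknowledging receipt each time.
        if not self.open:
            raise ConnectionError("session was torn down; raise your hand again")
        return f"[session {self.session_id}] acknowledged: {question}"

    def teardown(self):
        # Tear down: both sides agree the conversation is over, so free the resources.
        self.open = False

s = Session("johnny", "classroom")
print(s.exchange("What does the session layer do?"))
s.teardown()
```

Again, that's just the shape of the idea; on real networks, this work is handled by protocols like the H.323 and RTP we just covered.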
Keep those in your head and you're going to do great on the exam, because they do like to throw those in, and I want you to recognize them. Now, the second thing I mentioned is something known as NetBIOS. NetBIOS is used by computers to share files over a network, and Windows uses it as part of its file sharing as well, so you're going to see it a lot on your networks. Now, again, when we talk about layer five devices, there's really not a device, necessarily; it's more these protocols and pieces of software. So if you see something like H.323, RTP, or NetBIOS, I want you to remember these are all layer five technologies. Layer Six, The Presentation Layer. Now, the presentation layer is responsible for formatting the data that's going to be exchanged and securing that data with proper encryption. So when you think about layer six, there are a couple of keywords I want you to remember: data formatting and encryption. Every time we talk about data formatting or encryption, I want you to think about layer six, the presentation layer inside the OSI model. Now, when I talk about data formatting, what is that? Well, data is going to be formatted by a computer so that it has compatibility between different types of devices. There are some common formats out there that you may or may not have heard of. The first one is ASCII, which stands for the American Standard Code for Information Interchange. Now, ASCII is basically text. It's what says that the capital A is represented by the number 65. In the old days of computers, we had ASCII, and it was a seven-bit code that would tell us exactly what that letter or symbol was. For instance, ASCII character 64 is the @ sign in your email address. Then we have things like GIFs, which are little pictures that dance around and do motion, or a JPEG file, which is used for photographs, or a PNG, which is used for images on the internet. All of these are different types of data formats, and based on that format, the computer knows how to represent the data on the screen as it passes those files around. Because at the end of the day, all of these file formats really come down to a series of ones and zeros on your computer's hard drive or on your network. Now, when we talk about ASCII, it really is just a text-based encoding for us to use. This is going to ensure that the data is readable by the receiving system, because now we're all speaking the exact same language, and ASCII isn't the only text-based encoding out there. There's Unicode and a series of other ones. But if we all use ASCII, for instance, that means we all know what the proper data structure and formatting is. This allows us to negotiate the data transfer syntax for layer seven, the application layer, and we're going to talk about that in the next lesson. Now, the next piece of layer six is encryption. This is another thing the presentation layer is going to do for us. Encryption is used to scramble data while it's in transit to keep it secure from any prying eyes. This is going to provide us with confidentiality of our data as it crosses our network and as it's stored. Some examples of this would be something like TLS, which stands for Transport Layer Security, and that's what's being used right now to secure the data between your computer and a website like Facebook or Dion Training or Amazon or any of the other ones out there.
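Before we finish up with encryption, here's a quick sketch of the data-formatting side of layer six using nothing but Python's built-in text handling. Nothing here is exam-specific; it just shows that the same characters turn into different ones and zeros depending on whether both sides agree on ASCII or a richer encoding like Unicode's UTF-8.

```python
# ASCII maps characters to numbers: capital A is 65, and the @ sign is 64.
print(ord("A"), ord("@"))            # 65 64
print(chr(65), chr(64))              # A @

message = "Pay $100 to Dion Training"
ascii_bytes = message.encode("ascii")        # works: every character fits in ASCII
print(ascii_bytes)

# A character outside ASCII needs a richer encoding, such as UTF-8 (Unicode).
fancy = "Pay €100"
print(fancy.encode("utf-8"))                 # fine: UTF-8 can represent the euro sign
try:
    fancy.encode("ascii")
except UnicodeEncodeError as err:
    print("ASCII cannot format this data:", err)
```

Both sides have to agree on the format, and that agreement is exactly the presentation layer's job. Now, back to TLS.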
If you get that little padlock next to your domain name, that means you're using a TLS connection. This creates an encrypted tunnel so nobody else can see what's inside it, and that way they can't see your username, your password, or your credit card information as you pass it back and forth. Now, what are some examples of things at layer six? Well, things like scripting and markup languages are going to be a layer six thing because they're formatting data. So if we're dealing with HTML or XML or PHP or JavaScript, all of those tell the text-based ASCII how it should display on the screen. For example, I want to make this line bold, make this other one underlined, and make this one a certain font size. Those types of things can all be controlled using a formatting language like HTML to take the ASCII code that you typed in and translate it into something that looks better on the screen. We also have things like standard text, and this would be things like ASCII or Unicode or EBCDIC. All of these are different ways of displaying text from those ones and zeros. We also have things like pictures. We have GIFs and JPEGs and TIFFs and SVGs and PNGs, and all of these types of files. They're all different ways of representing ones and zeros as different graphical formats. We also have movie files like MP4s and MPEGs and MOV files, and all of these are going to show up as some kind of video for us, but they're all made up of ones and zeros in a particular format, and that data formatting makes it into a movie that you can watch, just like this video right now. So all of those examples I've shown you so far are all about presentation, presenting text or images or photographs or movies in a particular way. But we also have that example of encryption algorithms, things like TLS and SSL. These are taking your ones and zeros and presenting them differently, right? Because they're scrambling them up so nobody else can see them, and that secures the data in a jumbled format. That's the idea when you start dealing with TLS and SSL. So it is a type of data format, but it's not a clear presentation like we had with a movie file or a graphic file or an audio file or text. Instead, when we start dealing with encryption, we're really focused on the security of the data by scrambling it up and keeping prying eyes off of it. We've finally made it. We've gotten to layer 7, the application layer. We've worked our way up through the layers of the OSI model, starting all the way down with physical, then going up to data link, network, transport, session, presentation, and now we're here at application. The application layer is going to provide all of your application-level services, which makes sense since it's the application layer. But I don't want you to think about an application like Internet Explorer or Chrome or Word or PowerPoint or Notepad. That's not the kind of application we're talking about. We're actually talking about lower-level applications. When we talk about applications in the OSI context, we're really talking about things like file transfer or network transfer services. This is the layer where the user communicates with the computer, and the computer can then take that information and pass it across the network. These are functions like application services and service advertisement, and we're going to talk about both of those as we go through this short lecture.
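Before we get into those, here's one last look back at layer six: a minimal sketch of wrapping a plain socket in TLS using Python's standard ssl module. The hostname example.com is just a placeholder (any HTTPS site would do), and this needs internet access to actually run; it's a sketch of the idea, not a production client.

```python
import socket
import ssl

HOSTNAME = "example.com"                  # placeholder site for the demonstration
context = ssl.create_default_context()    # sensible defaults: certificate checks, modern TLS

with socket.create_connection((HOSTNAME, 443)) as raw_sock:
    # Wrapping the socket is the presentation-layer step: from here on,
    # everything we send is scrambled before it ever touches the wire.
    with context.wrap_socket(raw_sock, server_hostname=HOSTNAME) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())      # e.g. TLSv1.3
        print("Cipher in use:", tls_sock.cipher()[0])
```

With that, let's get into application services and service advertisement.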
Application services are what unite the communicating components of more than one network application. File transfer and file sharing, email, remote access, network management activities, and client-server processes are all different types of application services. Now, again, I want to caution you: when I talk about email, I'm not talking about Microsoft Outlook. Instead, I'm talking about the low-level protocols used by email, things like POP3, the Post Office Protocol version 3; IMAP, the Internet Message Access Protocol; or SMTP, the Simple Mail Transfer Protocol. We're going to talk about these when we go through our ports and protocols in a future lesson, where you'll have to memorize the different port numbers for each of these three types of services. Now, the other thing we want to talk about is service advertisement. This is where applications can send out announcements to other devices on the network to say, "These are the different services that I offer." If you have something like a printer or a file server that is essentially managed by your Active Directory, Active Directory can do those advertisements for you. If not, though, your file servers and your printers can actually advertise for themselves, and that's the idea of a service advertisement. For example, let's say you have a nice wireless printer on your network. Anytime you connect to the wireless network, that printer sits out there and announces, "Hey, brand new device that just joined my network, I'm a printer. And guess what? You can use me to print things." That's what it does; it advertises itself. And all of this is done under the concept of service advertisement here at layer 7. Now, when we talk about layer 7, what are some examples of layer 7 things? Well, these are email protocols like POP3, IMAP, and SMTP. This would be web browsing protocols like HTTP and HTTPS. This could be something like DNS, the Domain Name System, which is going to translate our names to numbers and our numbers to names. It can be file transfer protocols like FTP, FTPS, and SFTP. It could be remote access protocols like Telnet and SSH, or network management with the Simple Network Management Protocol, SNMP. All of these things are layer 7. And if it sounds like I just threw a lot of acronyms at you, I know I did. Don't worry about it. We are going to talk about each and every one of those as we go through the future lessons in this course. So just hang with me, and by the end of this, you are going to know all of those like the back of your hand. You're going to know what the acronyms are, what they stand for, and what ports they operate on, because that is going to be important for the exam as well. Encapsulation and decapsulation. In this lesson, we're going to talk about encapsulation and decapsulation. Encapsulation is the process of putting headers, and sometimes trailers, around our data. Think about it like this: you just finished writing a letter to your grandma and now you want to send it to her. Well, to do that, you need to put it in an envelope. When you put the letter in the envelope, you're encapsulating it. Now, once your grandma gets that envelope, she wants to be able to read the letter, so she has to take it out of the envelope in order to read it.
This process is known as decapsulation, because we're removing the encapsulation that was applied earlier. Now, I know this is a silly example, but that's exactly what happens as we send data on our networks. It's continually being encapsulated and decapsulated as it moves up or down the layers of the OSI model. If we move down the OSI layers from seven to one, we encapsulate our data. If we move upward from layer one to seven, we decapsulate our data. So let's take a closer look at how this works in the real world. In the OSI model, we use protocol data units, or PDUs, to transmit our data. A protocol data unit is just a single unit of information transmitted within a computer network. In the OSI model, they're simply named with an L, the layer number, and PDU; for example, an L7 PDU is a layer 7 PDU. This terminology can be used for every single layer, but we also have special names for the PDUs at layers 1, 2, 3, and 4. For layer one, we call them bits. For layer two, we call them frames. For layer three, we call them packets. And for layer four, we call them segments if we're using TCP, or datagrams if we're using UDP. Now, as a user creates data that they want to send over a network, they're going to enter it into an application at the application layer, layer seven. This data then has a layer seven header added that contains metadata with the parameters agreed upon by the specific application. So if you're using HTTP or FTP, there's going to be specific metadata for that type of data. Then that information is passed down to layer six, which encapsulates the layer seven header and data together and then adds its own layer six header, containing its own metadata with information about the presentation or encryption formats being used. Next, it's passed down to layer five, which encapsulates the layer six header and the layer six data and then adds its own layer five header based on the metadata about the session. As you can see, it's like taking a letter, or in this case data, wrapping it in an envelope, and then writing some information on that envelope. That's our header. When we hand it to the next person, they're going to put it in another envelope, write their own metadata on the outside of that envelope, and then pass it to the next person. And we keep doing this as we go down the layers, all the way down to layer one. Now, the headers added at layers 4, 3, 2, and 1 are very specific, and they actually help ensure the message reaches its final destination. So let's take a look at the header that's added at layer four, the transport layer. If you remember, the transport layer uses different protocols like TCP or UDP. The TCP header has 10 mandatory fields, totaling 20 bytes of information. This includes the source port, the destination port, the sequence number, the acknowledgement number, the TCP data offset, the reserved field, which is always set to zero because it's not really used, the control flags, the window size, the TCP checksum, the urgent pointer, and then any optional TCP data. Now, you don't need to know all of these fields in depth, but there are a couple that are pretty important. For example, the source and destination ports are important to understand because they help determine where the information is being sent from and where it's being sent to, and they allow it to pass through a firewall by going to the right ports.
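To make those fields a little more concrete, here's a minimal sketch that packs a 20-byte TCP header with Python's struct module. The port numbers, sequence number, and window size are made-up example values, and the checksum is left at zero rather than calculated, so treat this as an illustration of the layout rather than a working protocol stack.

```python
import struct

# Made-up example values for a SYN segment from port 49152 to port 80.
source_port      = 49152
destination_port = 80
sequence_number  = 1000
ack_number       = 0            # nothing to acknowledge yet on the first SYN
data_offset      = 5            # header length in 32-bit words: 5 x 4 = 20 bytes, no options
reserved         = 0            # always zero, as discussed above
flags            = 0b000010     # FIN=1, SYN=2, RST=4, PSH=8, ACK=16, URG=32; this one is a SYN
window_size      = 65535        # how much data we are willing to receive
checksum         = 0            # a real stack computes this before sending
urgent_pointer   = 0

header = struct.pack(
    "!HHIIBBHHH",                         # network byte order, 20 bytes total
    source_port, destination_port,
    sequence_number, ack_number,
    (data_offset << 4) | reserved,        # the offset and reserved bits share one byte
    flags,
    window_size, checksum, urgent_pointer,
)
print(len(header), "bytes:", header.hex())
```

Ten fields, 20 bytes, exactly as listed above.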
Also, the sequence number and acknowledgement number are going to be used to ensure all the data is properly received by the destination when it's sent by the original transmitter, so those are also important when you're using TCP. Another important concept in the TCP header is the control flags. There are six control flags that are used to manage the data flow before, during, and at the end of the communication when you're finished. You should already be familiar with the three-way handshake. That uses the SYN packet sent by the client, the SYN-ACK packet sent by the server, and the ACK packet that the client sends back to the server at the end. These packets are sent using the TCP flags of SYN or ACK inside your TCP header. Now, in addition to the SYN and ACK flags, there are also four others: FIN, RST, PSH, and URG. First, we have the SYN flag, or synchronization flag. This is by far the most well-known flag in TCP communications because it's used to synchronize the connection during the three-way handshake. Next, we have the ACK, or acknowledgement flag. This is also used during the three-way handshake, but in addition to that, we use it to acknowledge the successful receipt of packets throughout the communication. The FIN, or finish flag, is used to tear down the virtual connection that was created by the three-way handshake and the SYN flag. The FIN flag appears when the last packets are exchanged between a client and a server and a host is ready to shut down the connection. Next, we have the RST flag, or reset flag. This is used when a client or server receives a packet that it was not expecting during the current connection. For example, if you tried to establish a connection with a server that didn't want to accept any new connections, it could send back an RST, or reset, to inform your client that it is not accepting connections and automatically reject your request. The next one is the PSH flag, or push flag. The push flag is used to ensure the data is given priority and is processed right away at the sending or receiving end. Most often, this flag is added to a packet at the beginning or end of a data transfer. The final flag is URG, the urgent flag. The urgent flag is like the push flag in that it identifies incoming data as urgent. The main difference is that push is used by the sender to indicate data with a higher priority level, while urgent tells the recipient to process that data immediately and ignore anything else in the queue. With urgent, this could lead to packets violating the normal first-in, first-out order, so it should only be used by particular applications when necessary. Now, if you're using UDP instead of TCP, you're going to be using the User Datagram Protocol, so let's look at the User Datagram Protocol header. This is another transport layer, or layer four, header that's used in our networks. Remember, UDP is an unreliable, connectionless protocol, so its header is significantly smaller than TCP's. With UDP, we only have an eight-byte header instead of the 20-byte header used in TCP. UDP only has four fields: the source port, the destination port, the length, and the checksum. The source and destination ports are just like the ones used in TCP; they dictate where the data is coming from and where it's going to. The length indicates how many bytes the total UDP datagram is, including the header and its data.
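And for comparison, here's the same kind of sketch for the eight-byte UDP header, again with made-up port numbers and the checksum left at zero purely for illustration.

```python
import struct

# Made-up example values for a UDP datagram from port 49153 to port 53.
source_port      = 49153
destination_port = 53
payload          = b"a tiny datagram"
length           = 8 + len(payload)   # the 8-byte header plus the data it carries
checksum         = 0                  # left empty here; a real stack usually fills it in

header = struct.pack("!HHHH", source_port, destination_port, length, checksum)
print(len(header), "byte header:", header.hex())
```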
The checksum is not a mandatory field, but it can be used to provide some validation that the UDP data being sent was actually received with some level of integrity. Next, let's move down to layer three, the network layer. As we move down another layer, we're going to again encapsulate the data and add a header. This time we're going to add the IP, or Internet Protocol, header. The IP header contains several fields, including the IP version, the length of the IP header, the type of service, which was defined by the standard but is rarely used in its original form, the total length of the packet and header, the identifier, the flags, the fragment offset, the time to live, the protocol, the header checksum, the source IP, the destination IP, and the options and padding. Now, as we continue down the layers, we reach layer two, the data link layer, and this is going to encapsulate the data by adding an Ethernet header. This header features just a few things, including the destination MAC address, the source MAC address, the EtherType field, and an optional VLAN tag using either IEEE 802.1Q or IEEE 802.1ad. We're going to talk more about VLANs in a separate video, though, because they're an important concept. So let's talk about the MAC address. A MAC address is a physical address that's used to identify a network card on your local area network. This allows our source to find our destination by using this type of layer two addressing, and it's what's processed by the switches in your network. Now, the EtherType field is used to indicate which protocol is encapsulated in the payload of the frame. So if you're using IPv4 or IPv6, that can be indicated here using the EtherType field. In addition to the Ethernet header, a frame being sent at layer two will also contain a payload. In Ethernet, the minimum payload is 42 bytes if VLANs are being used and 46 bytes if no VLANs are being used. Now, when you're trying to send a payload, there is a maximum size known as the MTU, or maximum transmission unit. When we talk about payloads, this is the data we're trying to send across the network. By default, Ethernet uses an MTU of 1,500 bytes as its maximum payload size. Now, if you have a payload that's larger than 1,500 bytes, then you need to allow for what's known as a jumbo frame. This just means the frame is going to be larger than 1,500 bytes. To configure this on your switch, you're going to reconfigure your MTU size, or maximum transmission unit size, to something larger than 1,500 bytes. Alright, that was a ton of information we just covered, so let's review a couple of the main concepts. First, remember, as data moves from layer seven to layer one, we are going to encapsulate that data. So as we move down the OSI layers, we encapsulate the data and add a header at each of those layers. At layer four, we add our source and destination ports. At layer three, we add our source and destination IP addresses. At layer two, we add our source and destination MAC addresses. Once we get to layer one, we're simply transmitting our layer two frames as a series of ones and zeros over the medium. When that signal is received by the next device, for example a switch, that device is going to put the frames back together from the electrical, optical, or radio frequency signals it received over layer one. Then it's going to decapsulate the layer two information by reading the Ethernet header.
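Here's a minimal sketch of that layer two step: packing an Ethernet header with made-up MAC addresses and checking a payload size against the default 1,500-byte MTU. The EtherType value 0x0800 marks the payload as IPv4; everything else here is a hypothetical example value.

```python
import struct

def mac_to_bytes(mac: str) -> bytes:
    """Turn a human-readable MAC like 'aa:bb:cc:dd:ee:01' into 6 raw bytes."""
    return bytes(int(part, 16) for part in mac.split(":"))

DESTINATION_MAC = "aa:bb:cc:dd:ee:01"   # made-up addresses for the example
SOURCE_MAC      = "aa:bb:cc:dd:ee:02"
ETHERTYPE_IPV4  = 0x0800                # tells the receiver the payload is an IPv4 packet
DEFAULT_MTU     = 1500                  # Ethernet's default maximum payload size in bytes

header = (mac_to_bytes(DESTINATION_MAC)
          + mac_to_bytes(SOURCE_MAC)
          + struct.pack("!H", ETHERTYPE_IPV4))
print("Ethernet header:", len(header), "bytes:", header.hex())

for payload_size in (512, 1500, 9000):
    if payload_size <= DEFAULT_MTU:
        print(f"{payload_size:5}-byte payload fits in a standard frame")
    else:
        print(f"{payload_size:5}-byte payload needs jumbo frames (MTU raised above {DEFAULT_MTU})")
```

That destination MAC at the very front of the header is the first thing the switch looks at.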
If the destination MAC address belongs to a device on one of the switch's ports, the switch sends the frame out to it. If not, the frame is typically headed for the default gateway, which is a router. That router then decapsulates the data up to layer three and reads the destination IP address. If the destination is on one of its networks, it forwards the data to that device. If not, it re-encapsulates the data and sends it out toward its own default gateway, and this process continues until the final destination host is found. Once that host is found, it keeps decapsulating the information all the way back up to layer seven, where its application can read and understand the underlying data. Now, we're going to cover a lot more about how this data transfer happens when we talk about switches and routers later in this course, but for now, this is the basics you need to understand.
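To tie the whole encapsulation and decapsulation story together, here's one last minimal sketch. Every header is faked as a labeled placeholder rather than real packed bytes; the point is simply that each layer wraps what it was handed, and the receiver peels those wrappers off in the reverse order. (The removeprefix method used here needs Python 3.9 or newer.)

```python
# Purely illustrative placeholder headers, not real protocol bytes.
data = b"GET / HTTP/1.1"                       # the application-layer payload

segment = b"[TCP ports/seq]" + data            # Layer 4 adds source and destination ports
packet  = b"[IP src/dst]"    + segment         # Layer 3 adds source and destination IPs
frame   = b"[Eth MACs]"      + packet          # Layer 2 adds source and destination MACs
print("On the wire (as bits):", frame)

# The receiving side decapsulates in reverse: strip Layer 2, then Layer 3, then Layer 4.
recovered = frame.removeprefix(b"[Eth MACs]")
recovered = recovered.removeprefix(b"[IP src/dst]")
recovered = recovered.removeprefix(b"[TCP ports/seq]")
print("Back at Layer 7:", recovered)
```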
