



Chapter 12
Network Security

THE COMPTIA SECURITY+ EXAM OBJECTIVES COVERED IN THIS CHAPTER INCLUDE:

Domain 1.0: General Security Concepts

1.2. Summarize fundamental security concepts.
- Zero Trust (Control Plane (Adaptive identity, Threat scope reduction, Policy-driven access control, Policy Administrator, Policy Engine), Data Plane (Implicit trust zones, Subject/System, Policy Enforcement Point))
- Deception and disruption technology (Honeypot, Honeynet, Honeyfile, Honeytoken)

Domain 2.0: Threats, Vulnerabilities, and Mitigations

2.4. Given a scenario, analyze indicators of malicious activity.
- Network attacks (Distributed denial-of-service (DDoS) (Amplified, Reflected), Domain Name System (DNS) attacks, Wireless, On-path, Credential replay, Malicious code)

2.5. Explain the purpose of mitigation techniques used to secure the enterprise.
- Segmentation
- Access control (Access control list (ACL))

Domain 3.0: Security Architecture

3.1. Compare and contrast security implications of different architecture models.
- Architecture and infrastructure concepts (Network infrastructure (Physical isolation (Air-gapped), Logical segmentation, Software-defined networking (SDN)), High availability)

3.2. Given a scenario, apply security principles to secure enterprise infrastructure.
- Infrastructure considerations (Device placement, Security zones, Attack surface, Connectivity, Failure modes (Fail-open, Fail-closed), Device attribute (Active vs. passive, Inline vs. tap/monitor), Network appliances (Jump server, Proxy server, Intrusion prevention system (IPS)/Intrusion detection system (IDS), Load balancer, Sensors), Port security (802.1X, Extensible Authentication Protocol (EAP)), Firewall types (Web application firewall (WAF), Unified threat management (UTM), Next-generation firewall (NGFW), Layer 4/Layer 7))
- Secure communications/access (Virtual private network (VPN), Remote access, Tunneling (Transport Layer Security (TLS), Internet protocol security (IPSec)), Software-defined wide area network (SD-WAN), Secure access service edge (SASE))
- Selection of effective controls

Domain 4.0: Security Operations

4.1. Given a scenario, apply common security techniques to computing resources.
- Hardening targets (Switches, Routers)

4.4. Explain security alerting and monitoring concepts and tools.
- Tools (Simple Network Management Protocol (SNMP) traps)

4.5. Given a scenario, modify enterprise capabilities to enhance security.
- Firewall (Rules, Access lists, Ports/protocols, Screened subnets)
- IDS/IPS (Trends, Signatures)
- Web filter (Agent-based, Centralized proxy, Universal Resource Locator (URL) scanning, Content categorization, Block rules, Reputation)
- Implementation of secure protocols (Protocol selection, Port selection, Transport method)
- DNS filtering
- Email security (Domain-based Message Authentication Reporting and Conformance (DMARC), DomainKeys Identified Mail (DKIM), Sender Policy Framework (SPF), Gateway)
- File integrity monitoring
- DLP
- Network Access Control (NAC)

Networks are at the core of our organizations, transferring data and supporting the services that we rely on to conduct business. That makes them a key target for attackers and a crucial layer in defensive architecture and design. In this chapter, you will consider network attack surfaces and learn about important concepts and elements in secure network design, such as network segmentation and separation into security zones based on risk and security requirements.
You will also explore protective measures like port security, how to secure network traffic using VPNs and tunneling even when traversing untrusted networks, and how software-defined networks and other techniques can be used to build secure, resilient networks.

Next, you will learn about network appliances and security devices. Jump servers, load balancers and their scheduling techniques and functional models, proxy servers, intrusion detection and prevention systems, and a variety of other network security tools and options make up the middle of this chapter. With appliances, devices, and design concepts in mind, you will move on to how networks are managed and hardened at the administrative level using out-of-band management techniques, access control lists, and other controls. You will then explore DNS and DNS filtering. In addition, you will review secure protocols and their uses and learn how to identify secure protocols based on their service ports and protocols, what they can and can't do, and how they are frequently used in modern networks. Next, you will learn about modern techniques to secure email. Once you have covered tools and techniques, you will review common indicators of malicious activity related to networks, including attacks such as on-path, DNS, layer 2, distributed denial-of-service (DDoS), and credential replay attacks, and how they can be detected and sometimes stopped.

Designing Secure Networks

As a security professional, you must understand and be able to implement key elements of design and architecture found in enterprise networks in order to properly secure them. You need to know what tools and systems are deployed and why, as well as how they can be deployed as part of a layered security design. Selection of effective controls is a key component in securing networks and requires both an understanding of threats and the controls that can address them.
Exam Note

The Security+ exam doesn't delve deeply into the underlying components of networking and instead focuses on implementing designs and explaining the importance of security concepts and components. For this exam, focus on the items listed in the exam outline, and consider how you would implement them and why they're important. However, if you're not familiar with the basics of TCP/IP-based networks, you may want to spend some time familiarizing yourself with them in addition to the contents of this chapter as you progress in your career. Although you're unlikely to be asked to explain a three-way handshake for TCP traffic on the Security+ exam, if you don't know what the handshake is, you'll be missing important tools in your security toolkit. Many security analysts take both the Network+ and the Security+ exam for that reason.

Security designs in most environments have historically relied on the concept of defense-in-depth. In other words, they are built around multiple controls designed to ensure that a failure in a single control, or even multiple controls, is unlikely to cause a security breach. As you study for the exam, consider how you would build an effective defense-in-depth design using these components and how you would implement them to ensure that a failure or mistake would not expose your organization to greater risk.

With defense-in-depth in mind, it helps to understand that networks are also built in layers. The Open Systems Interconnection (OSI) model is used to conceptually describe how devices and software operate together through networks. Beyond the conceptual level, designers will create security zones using devices that separate networks at trust boundaries and will deploy protections appropriate to both the threats and security requirements of each security zone.

Networking Concepts: The OSI Model

The OSI model describes networks using seven layers, as shown in the following graphic.
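The graphic itself does not survive in this transcript; as a stand-in, the seven layers and the media/host grouping described below can be sketched in Python (the names `OSI_LAYERS` and `layer_group` are illustrative, not from the text):

```python
# A stand-in for the chapter's OSI model graphic: the seven layers,
# numbered from the Physical layer up to the Application layer.
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
}

MEDIA_LAYERS = {n: name for n, name in OSI_LAYERS.items() if n <= 3}
HOST_LAYERS = {n: name for n, name in OSI_LAYERS.items() if n >= 4}

def layer_group(layer: int) -> str:
    """Classify a layer number the way the chapter does."""
    if layer in MEDIA_LAYERS:
        return "media"
    if layer in HOST_LAYERS:
        return "host"
    raise ValueError(f"no such OSI layer: {layer}")

print(layer_group(3))  # a "Layer 3 firewall" operates at a media layer
```

This kind of mapping is only a memory aid; the point the chapter makes is that the layer at which a device operates determines what traffic it can see and act on.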
Although you don't need to memorize the OSI model for the Security+ exam, it does help to put services and protocols into a conceptual model. You will also frequently encounter OSI model references, like “Layer 3 firewalls” or “That's a Layer 1 issue.”

The OSI model is made up of seven layers, typically divided into two groups: the host layers and the media layers. Layers 1–3, the Physical, Data Link, and Network layers, are the media layers. They are used to transmit the bits that make up network traffic, to transmit frames or logical groups of bits, and to make networks of systems or devices work properly using addressing, routing, and traffic control schemes. Layers 4–7, the Transport, Session, Presentation, and Application layers, are the host layers. They address things like reliable data transmission, session management, encryption, and translation of data from the application to the network and back, and they ensure that APIs and other high-level tools work.

As you think about network design, remember that this is a logical model that can help you think about where network devices and security tools interact within the layers. You should also consider the layer at which a protocol or technology operates and what that means for what it can see and impact.

Infrastructure Considerations

As you prepare for the Security+ exam, you'll want to understand a number of infrastructure design considerations. These considerations can influence how organizations design their infrastructure by impacting cost, manageability, functionality, and availability.

The organization's attack surface, or a device's attack surface, consists of the points at which an unauthorized user could gain access. This includes services, management interfaces, and any other means that attackers could use to obtain access to or disrupt the organization. Understanding an organization's attack surface is a key part of security and infrastructure design.

Device placement is a key concern.
Devices may be placed to secure a specific zone or network segment, to allow them to access traffic from a network segment, VLAN, or broader network, or may be placed due to capabilities like maximum throughput. Common placement options include at network borders, datacenter borders, and between network segments and VLANs, but devices may also be placed to protect specific infrastructure.

Security zones are frequently related to device placement. Security zones are physical or virtual network segments, or other components of an infrastructure, that can be separated from less secure zones through logical or physical means. Security zones are typically created on trust or data sensitivity boundaries but may be created for any security-related purpose that an organization feels is important enough to warrant a distinct zone. Common examples include segregated guest networks; Internet-facing networks used to host web servers and other Internet-accessible services; and management VLANs, which are used to access network device management ports for switches, access points, routers, and similar devices.

Connectivity considerations include how the organization connects to the Internet, whether it has redundant connections, how fast the connections are, what security controls the upstream connectivity provider can make available, what type of connectivity is in use (fiber optic, copper, wireless, or some other form), and even whether the connection paths are physically separated and use different connectivity providers so that a single event cannot disrupt the organization's connectivity.

Another consideration is failure modes and how the organization wants to address them. When a security device fails, should it fail so that no traffic passes it, or should it fail so that all traffic passes it?
These two states are known as fail-closed and fail-open, and the choice between the two is tightly linked to the organization's business objectives. If a fail-closed device fails and the organization cannot conduct business, is that a greater risk than the device failing open and the organization not having the security controls it provides available?

Network taps, which are devices used to monitor or access traffic, may be active or passive. That means they're either powered or not powered. Passive devices can't lose power, and thus a potential failure mode is removed. Devices can also be inline, with all traffic flowing through them, or they can be set up as taps or monitors, which copy traffic instead of interacting with the original network traffic.

Network Design Concepts

Much like the infrastructure design concepts you just explored, the Security+ exam outline includes a number of network design concepts and terms that you'll need to know as you prepare for the exam.

Physical Isolation

Physical isolation is the idea of separating devices so that there is no connection between them. This is commonly known as an air-gapped design because there is “air” between them instead of a network cable or network connection. Physical isolation requires physical presence to move data between air-gapped systems, preventing remote attackers from bridging the gap.

When Air Gaps Fail

The Stuxnet malware was designed to overcome air-gapped nuclear facility security by copying itself to removable devices. That meant that when technicians plugged their infected USB drives into systems that were protected by physical isolation, the malware was able to attack its intended targets despite the protection that an air-gapped design offers. Like any other control, physical separation only addresses some security concerns and needs to be implemented with effective controls to ensure that other bad practices do not bypass the separation.
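Returning to the failure modes discussed earlier in this section, the fail-open versus fail-closed trade-off can be sketched in a few lines of Python. The `filter_traffic` helper is a hypothetical illustration, not a real device API:

```python
# Sketch (illustrative names): how a security device's configured
# failure mode determines whether traffic passes when the device is down.
def filter_traffic(allowed_by_policy: bool, device_healthy: bool,
                   fail_mode: str) -> bool:
    """Return True if the packet is forwarded."""
    if device_healthy:
        return allowed_by_policy  # normal policy enforcement
    if fail_mode == "fail-open":
        return True   # device down: all traffic passes, controls are lost
    if fail_mode == "fail-closed":
        return False  # device down: nothing passes, business may stop
    raise ValueError(f"unknown fail mode: {fail_mode}")
```

Note that neither outcome is "safe" in the abstract: the right choice depends on whether lost availability or lost control is the greater business risk.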
Logical Segmentation

Logical segmentation is done using software or settings rather than a physical separation using different devices. Virtual local area networks (VLANs) are a common method of providing logical segmentation. VLAN tags are applied to packets that are part of a VLAN, and systems on that virtual network segment see them like they would a physical network segment. Attacks against logical segmentation attempt to bypass the software controls that separate traffic to allow traffic to be sent or received from other segments.

High Availability

High availability (HA) is the ability of a service, system, network, or other element of infrastructure to be consistently available without downtime. High availability designs will typically allow for upgrades, patching, system or service failures, changes in load, and other events without interruption of services. High availability targets are set when systems are designed, and then various solutions like clustering, load balancing, and proxies are used to ensure that the solution reaches availability targets. HA design typically focuses on the ability to reliably switch between components as needed, elimination of single points of failure that could cause an outage, and the ability to detect and remediate or work around a failure if it does happen.

Implementing Secure Protocols

Implementation of secure protocols is a common part of ensuring that communications and services are secure. Examples of secure protocols are the use of HTTPS (TLS) instead of unencrypted HTTP, using SSH instead of Telnet, and wrapping other services using TLS. Protocol selection for most organizations will default to using the secure protocol if it exists and is supported, and security analysts will typically note the use of insecure protocols as a risk if they are discovered. Port selection is related to this, although default ports for protocols are normally preselected.
Some protocols use different ports for the secure version of the protocol, like the use of port 443 for HTTPS instead of 80 for HTTP. Others, like Microsoft's SQL Server, use the same port and rely on client requests for TLS connections on TCP 1433. Finally, selection of the transport method, including protocol versions, is important when selecting secure protocols. Downgrade attacks and simply using insecure versions of protocols can lead to data exposures, so selecting and requiring appropriate versions of protocols like TLS is an important configuration decision.

Security Through Obscurity and Port Selection

Some organizations choose to run services on alternate ports to decrease the amount of scanning traffic that reaches the system. While running on an alternate port can somewhat decrease specialized scan traffic, many port scans will attempt to scan all available ports and will still discover the service. Is using an alternate port the right answer? In some cases, it may make sense for an organization to do so, or it may be required because another service that uses the default port is already running, like TCP 80 for a web server. In general, however, using an alternate port is not a typical security control.

Reputation Services

Reputation describes services and data feeds that track IP addresses, domains, and hosts that engage in malicious activity. Reputation services allow organizations to monitor or block potentially malicious actors and systems, and reputation systems are often combined with threat feeds and log monitoring to provide better insight into potential attacks.

Software-Defined Networking

Software-defined networking (SDN) uses software-based network configuration to control networks. SDN designs rely on controllers that manage network devices and configurations, centrally managing the software-defined network.
This allows networks to be dynamically tuned based on performance metrics and other configuration settings, and to be customized as needed in a flexible way. SDN can be leveraged as part of security infrastructure by allowing dynamic configuration of security zones to add systems based on authorization or to remove or isolate systems when they need to be quarantined.

SD-WAN

Software-defined wide area network (SD-WAN) is a virtual wide area network design that can combine multiple connectivity services for organizations. SD-WAN is commonly used with technologies like Multiprotocol Label Switching (MPLS), 4G and 5G, and broadband networks. SD-WAN can help by providing high availability and allowing networks to route traffic based on application requirements while controlling costs by using less expensive connection methods when possible.

While MPLS isn't in the main body of the Security+ exam outline, it still shows up in the acronym list, and you'll encounter it in real-world scenarios where SD-WAN is used. MPLS forwards traffic using labels assigned to forwarding equivalence classes (FECs) rather than network addresses, and those labels establish paths between endpoints. An FEC for voice or video would use low-latency paths to ensure real-time traffic delivery, while email would operate in a best-effort class since it is less susceptible to delays and higher latency. MPLS is typically a more expensive type of connection, and many organizations are moving away from MPLS using SD-WAN technology.

Secure Access Service Edge

Secure Access Service Edge (SASE; pronounced “sassy”) combines virtual private networks, SD-WAN, and cloud-based security tools like firewalls, cloud access security brokers (CASBs), and zero-trust networks to provide secure access for devices regardless of their location. SASE is deployed to ensure that endpoints are secure, that data is secure in transit, and that policy-based security is delivered as intended across an organization's infrastructure and services.
We'll cover VPNs and Zero Trust more later in the chapter, but cloud access security brokers are only mentioned in the acronym list of the Security+ exam outline. A CASB is a policy enforcement point used between service providers and service consumers to allow organizations to enforce their policies for cloud resources. CASBs may be deployed on-premises or in the cloud, and they help organizations ensure policies are enforced across the complex array of cloud services that their users access while also providing visibility and other features.

As you review the rest of the chapter, you'll want to consider how and where these concepts can be applied to the overall secure network design elements and models that you're reviewing.

Network Segmentation

One of the most common concepts in network security is the idea of network segmentation. Network segmentation divides a network into logical or physical groupings that are frequently based on trust boundaries, functional requirements, or other reasons that help an organization apply controls or assist with functionality. A number of technologies and concepts are used for segmentation, but one of the most common is the use of virtual local area networks (VLANs). A VLAN sets up a broadcast domain that is segmented at the Data Link layer. Switches or other devices are used to create a VLAN using VLAN tags, allowing different ports across multiple network devices like switches to all be part of the same VLAN without other systems plugged into those switches being in the same broadcast domain.

A broadcast domain is a segment of a network in which all the devices or systems can reach one another via packets sent as a broadcast at the Data Link layer. Broadcasts are sent to all machines on a network, so limiting the broadcast domain makes networks less noisy by limiting the number of devices that are able to broadcast to one another.
Broadcasts don't cross boundaries between networks: if your computer sends a broadcast, only those systems in the same broadcast domain will see it.

A number of network design concepts describe specific implementations of network segmentation:

- Screened subnets (often called DMZs, or demilitarized zones) are network zones that contain systems that are exposed to less trusted areas. Screened subnets are commonly used to contain web servers or other Internet-facing devices but can also describe internal segments where trust levels are different.
- Intranets are internal networks set up to provide information to employees or other members of an organization, and they are typically protected from external access.
- Extranets are networks that are set up for external access, typically by partners or customers rather than the public at large.

Although many network designs used to presume that threats would come from outside the security boundaries used to define network segments, the core concept of Zero Trust networks is that nobody is trusted, regardless of whether they are an internal or an external person or system. Therefore, Zero Trust networks should include security between systems as well as at security boundaries.

They Left on a Packet Traveling East

You may hear the term “east-west” traffic used to describe traffic flow in a datacenter. It helps to picture a network diagram with systems side by side in a datacenter and network connections between zones or groupings at the top and bottom. Traffic between systems in the same security zone moves left and right between them, thus “east and west” as you would see on a map. This terminology is used to describe system-to-system communications inside the datacenter, and monitoring east-west traffic can be challenging. Modern Zero Trust networks don't assume that system-to-system traffic will be trusted or harmless, and designing security solutions that can handle east-west traffic is an important part of security within network segments.
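The segmentation and broadcast-domain ideas above can be illustrated with Python's standard `ipaddress` module. The subnet values and the `same_broadcast_domain` helper are made-up examples, not addressing from the text:

```python
import ipaddress

# Sketch: two VLANs modeled as separate subnets (illustrative addressing).
# Hosts share a broadcast domain only within their own segment.
guest_vlan = ipaddress.ip_network("10.10.20.0/24")
mgmt_vlan = ipaddress.ip_network("10.10.30.0/24")

guest_host = ipaddress.ip_address("10.10.20.15")
mgmt_host = ipaddress.ip_address("10.10.30.7")

def same_broadcast_domain(a, b, segments):
    """True if both hosts sit inside the same segment (VLAN/subnet)."""
    return any(a in seg and b in seg for seg in segments)

segments = [guest_vlan, mgmt_vlan]
print(same_broadcast_domain(guest_host, mgmt_host, segments))  # False
print(guest_vlan.broadcast_address)  # 10.10.20.255
```

A broadcast sent by the guest host reaches only addresses inside `10.10.20.0/24`; the management VLAN never sees it, which is exactly the noise reduction and isolation the text describes.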
Zero Trust

Organizations are increasingly designing their networks and infrastructure using Zero Trust principles. Unlike traditional “moat and castle” or defense-in-depth designs, Zero Trust presumes that there is no trust boundary and no network edge. Instead, each action is validated when requested as part of a continuous authentication process, and access is only allowed after policies are checked, including elements like identity, permissions, system configuration and security status, threat intelligence data review, and security posture.

Figure 12.1 shows NIST's logical diagram of a Zero Trust architecture (ZTA). Note that a subject using an untrusted system connects through a Policy Enforcement Point, which allows trusted transactions to the enterprise resources. The Policy Engine makes policy decisions based on rules that are then acted on by Policy Administrators.

FIGURE 12.1 NIST Zero Trust core trust logical components

In the NIST model:

- Subjects are the users, services, or systems that request access or attempt to use rights.
- Policy Engines make policy decisions based on both rules and external systems like those shown in Figure 12.1: threat intelligence, identity management, and SIEM devices, to name a few. They use a trust algorithm that makes the decision to grant, deny, or revoke access to a given resource based on the factors used for input to the algorithm. Once a decision is made, it is logged and the policy administrator takes action based on the decision.
- Policy Administrators are not individuals. Rather, they are components that establish or remove the communication path between subjects and resources, including creating session-specific authentication tokens or credentials as needed. In cases where access is denied, the policy administrator tells the policy enforcement point to end the session or connection.
Policy Enforcement Points communicate with Policy Administrators to forward requests from subjects and to receive instruction from the policy administrators about connections to allow or end. While the Policy Enforcement Point is shown as a single logical element in Figure 12.1, it is commonly deployed with a local client or application and a gateway element that is part of the network path to services and resources.

The Security+ exam outline focuses on two major Zero Trust planes that you should be aware of: the Control Plane and the Data Plane. The Control Plane is composed of four components:

- Adaptive identity (often called adaptive authentication) leverages context-based authentication that considers data points such as where the user is logging in from, what device they are logging in from, and whether the device meets security and configuration requirements. Adaptive authentication methods may then request additional identity validation if requirements are not met, or they may decline authentication if policies do not allow for additional validation.
- Threat scope reduction, sometimes described as “limited blast radius,” is a key component in Zero Trust design. Limiting the scope of what a subject can do, as well as what access is permitted to a resource, limits what can go wrong if an issue does occur. Threat scope reduction relies on least privilege as well as network segmentation that is based on identity rather than traditional network-based segmentation based on things like an IP address, a network segment, or a VLAN.
- Policy-driven access control is a core concept for Zero Trust. Policy Engines rely on policies as they make decisions that are then enforced by the Policy Administrator and Policy Enforcement Points.
- The Policy Administrator, as described in the NIST model, executes decisions made by a Policy Engine.
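Policy-driven access control can be sketched as a toy Policy Engine. The rules below (identity verification, device posture, login context) are invented for illustration and are far simpler than a real trust algorithm:

```python
# Minimal sketch of a Zero Trust policy decision (illustrative rules,
# not the NIST reference design): grant access only when identity,
# device posture, and context all satisfy policy.
def policy_engine(request: dict) -> str:
    """Return 'grant' or 'deny' for an access request."""
    if not request.get("identity_verified"):
        return "deny"  # unauthenticated subjects get nothing
    if not request.get("device_compliant"):
        return "deny"  # posture check failed: threat scope reduction
    if request.get("location") not in ("corp", "vpn"):
        return "deny"  # adaptive identity: unexpected login context
    return "grant"

request = {
    "identity_verified": True,
    "device_compliant": True,
    "location": "corp",
}
print(policy_engine(request))  # grant
```

In the NIST model this decision would be logged and then carried out by a Policy Administrator, with the Policy Enforcement Point actually opening or closing the communication path.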
The Data Plane includes:

- Implicit trust zones, which allow use and movement once a subject is authenticated by a Zero Trust Policy Engine
- Subjects and systems (subject/system), which are the devices and users that are seeking access
- Policy Enforcement Points, which match the NIST description

You can read the NIST publication about Zero Trust at https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207.pdf.

Exam Note

The Security+ exam outline emphasizes infrastructure considerations like device placement, attack surface, failure modes, connectivity, security zones, and device attributes. It also considers network design concepts like physical isolation and logical segmentation as well as high availability, secure protocols, the role of reputation services, SDN, SD-WAN, SASE, and Zero Trust. Make sure you understand each of these concepts as you prepare for the exam.

Network Access Control

Network segmentation helps divide networks into logical security zones, but protecting networks from unauthorized access is also important. That's where network access control (NAC), sometimes called network admissions control, comes in. NAC technologies focus on determining whether a system or device should be allowed to connect to a network. If it passes the requirements set for admission, NAC places it into an appropriate zone.

To accomplish this task, NAC can use a software agent that is installed on the computer to perform security checks, or the process may be agentless and run from a browser or by another means without installing software locally. Capabilities vary, and software agents typically have a greater ability to determine the security state of a machine by validating patch levels, security settings, antivirus versions, and other settings and details before admitting a system to the network. Some NAC solutions also track user behavior, allowing systems to be removed from the network if they engage in suspect behaviors.
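An agent-based NAC admission decision like the checks just described might be sketched as follows; the posture fields and thresholds are assumptions for illustration, not a real NAC product's API:

```python
# Sketch of an agent-based NAC admission decision (illustrative
# thresholds): validate patch level and antivirus signature age,
# then admit or quarantine the endpoint.
def nac_decision(report: dict, min_patch=10, max_av_age_days=7) -> str:
    if report.get("agent_missing"):
        return "quarantine"  # can't assess posture without the agent
    if report["patch_level"] < min_patch:
        return "quarantine"  # out of date: remediate, then recheck
    if report["av_signature_age_days"] > max_av_age_days:
        return "quarantine"  # stale antivirus signatures
    return "admit"

healthy = {"patch_level": 12, "av_signature_age_days": 1}
stale = {"patch_level": 12, "av_signature_age_days": 30}
print(nac_decision(healthy), nac_decision(stale))  # admit quarantine
```

The quarantine outcome corresponds to the remediation network described next: the device is isolated, fixed, and rechecked rather than simply rejected.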
Since NAC has the ability to validate security status for systems, it can be an effective policy enforcement tool. If a system does not meet security objectives, or if it has an issue, the system can be placed into a quarantine network. There the system can be remediated and rechecked, or it can simply be prohibited from connecting to the network.

NAC checks can occur before a device is allowed on the network (preadmission) or after it has connected (postadmission). The combination of agent or agentless solutions and pre- or postadmission designs is driven by security objectives and the technical capabilities of an organization's selected tool. Agent-based NAC requires installation and thus adds complexity and maintenance, but it provides greater insight and control. Agentless installations are lightweight and easier to handle for users whose machines may not be centrally managed or who have devices that may not support the NAC agent. However, agentless installations provide less detail. Preadmission NAC decisions keep potentially dangerous systems off a network; postadmission decisions can help with response and prevent failed NAC access attempts from stopping business.

The Security+ exam outline doesn't include pre- and postadmission as part of NAC, but we have included it here because NAC implementations may be agent-based or agentless and may be pre- or postadmission.

NAC and 802.1X

802.1X is a standard for authenticating devices connected to wired and wireless networks. It provides centralized authentication using EAP. 802.1X authentication requires a supplicant, typically a user or client device that wants to connect. The supplicant connects to the network, and the authentication server sends a request for it to identify itself. The supplicant responds, and an authentication process commences.
If the supplicant's credentials are accepted, the system is allowed access to the network, often with appropriate rules applied, such as placing the supplicant in a network zone or VLAN. We discussed EAP and 802.1X in more detail in Chapter 8, “Identity and Access Management.”

802.1X is frequently used for port-based authentication or port security, which is used at the network access switch level to authorize ports or to leave them as unauthorized. Thus, if devices want to connect to a local area network, they need to have an 802.1X supplicant and complete an authentication process.

Port Security and Port-Level Protections

Protecting networks from devices that are connected to them requires more than just validating their security state using NAC. A number of protections focus on ensuring that the network itself is not endangered by traffic that is sent on it.

Port security is a capability that allows you to limit the number of MAC addresses that can be used on a single port. This prevents a number of possible problems, including MAC (hardware) address spoofing, content-addressable memory (CAM) table overflows, and in some cases, plugging in additional network devices to extend the network. Although port security implementations vary, most port security capabilities allow you to either dynamically lock the port by setting a maximum number of MAC addresses or statically lock the port to allow only specific MAC addresses. Although this type of MAC filtering is less nuanced and provides less information than NAC does, it remains useful.

Port security was originally a term used for Cisco switches, but many practitioners and vendors use it to describe similar features found on switches from other companies as well.

Port security helps protect the CAM table, which maps MAC addresses to switch ports, allowing a switch to send traffic to the correct port.
If the CAM table doesn't have an entry, the switch will attempt to determine what port the address is on, broadcasting traffic to all ports if necessary. That means attackers who can fill a CAM table can make switches fail over to broadcasting traffic, making otherwise inaccessible traffic visible on their local port. Since spoofing MAC addresses is relatively easy, port security shouldn't be relied on to prevent untrusted systems from connecting. Despite this, configuring port security can help prevent attackers from easily connecting to a network if NAC is not available or not in use. It can also prevent CAM table overflow attacks that might otherwise be possible. In addition to port security, protocol-level protections are an important security capability that switches and other network devices provide. These include the following: Loop prevention focuses on detecting loops and then disabling ports to prevent the loops from causing issues. Spanning Tree Protocol (STP) sends bridge protocol data units (BPDUs), frames carrying a switch identifier, which switches monitor to detect and prevent loops; anti-loop implementations such as Cisco's loopback detection capability work similarly. Although a loop can be as simple as a cable with both ends plugged into the same switch, loops can also result from cables plugged into different switches, firewalls that are plugged in backward, devices with several network cards plugged into different portions of the same network, and other misconfigurations found in a network. Broadcast storm prevention, sometimes called storm control, prevents broadcast packets from being amplified as they traverse a network. Preventing broadcast storms relies on several features, such as enabling loop protection on ports that will be connected to user devices, enabling STP on switches to make sure that loops are detected and disabled, and rate-limiting broadcast traffic.
A broadcast storm occurs when a loop in a network causes traffic amplification as switches attempt to figure out where traffic should be sent. The following graphic shows a loop with three switches, A, B, and C, all connected together. Since traffic from host 1 could be coming through either switch B or switch C, a broadcast sent out asking where host 1 is will draw responses from both switches B and C. As this repeats, amplification occurs, causing a storm. Bridge Protocol Data Unit (BPDU) Guard protects STP by preventing ports that should not send BPDU messages from sending them. It is typically applied to switch ports where user devices and servers will be plugged in. Ports where switches will be connected will not have BPDU Guard turned on, because they may need to send BPDU messages that provide information about ports, addresses, priorities, and costs as part of the underlying management and control of the network. Dynamic Host Configuration Protocol (DHCP) snooping focuses on preventing rogue DHCP servers from handing out IP addresses to clients in a managed network. DHCP snooping drops messages from any DHCP server that is not on a list of trusted servers, but it can also be configured with additional options such as the ability to block DHCP messages where the source MAC and the hardware MAC of a network card do not match. A final security option is to drop messages releasing or declining a DHCP offer if the release or decline does not come from the same port that the request came from, preventing attackers from causing a DHCP offer or renewal to fail. These protections may seem like obvious choices, but network administrators need to balance the administrative cost and time to implement them for each port in a network, as well as the potential for unexpected impacts if security settings affect services.
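On many managed switches, these port-level protections are enabled per interface or globally. The following hypothetical Cisco IOS-style fragment sketches how port security, BPDU Guard, and DHCP snooping might be turned on; interface names, VLAN numbers, and exact command syntax vary by platform and software version, so verify against your vendor's documentation:

```
! Hypothetical Cisco IOS-style sketch -- verify commands for your platform
interface GigabitEthernet0/1
 switchport mode access
 switchport port-security                  ! enable port security
 switchport port-security maximum 2        ! dynamic lock: at most 2 MACs
 switchport port-security violation restrict
 spanning-tree bpduguard enable            ! user-facing port: no BPDUs expected
!
ip dhcp snooping                           ! enable DHCP snooping globally
ip dhcp snooping vlan 10
interface GigabitEthernet0/24
 ip dhcp snooping trust                    ! uplink toward the legitimate DHCP server
```

Note that BPDU Guard is applied only to the user-facing port, while the uplink toward the real DHCP server is marked as trusted so legitimate DHCP offers are not dropped.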
Implementation choices must take into account whether attacks are likely, how controlled the network segment or access to network ports is, and how difficult it will be to manage all the network ports that must be configured and maintained. Central network management tools as well as dynamic and rules-based management capabilities can reduce the time needed for maintenance and configuration, but effort will still be required for initial setup and ongoing support. Virtual Private Networks and Remote Access A virtual private network (VPN) is a way to create a virtual network link across a public network that allows the endpoints to act as though they are on the same network. Although it is easy to think about VPNs as an encrypted tunnel, encryption is not a requirement of a VPN tunnel. There are two major VPN technologies in use in modern networks. The first, IPSec VPNs, operate at layer 3, require a client, and can operate in either tunnel or transport mode. In tunnel mode, entire packets of data sent to the other end of the VPN connection are protected. In transport mode, the IP header is not protected but the IP payload is. IPSec VPNs are often used for site-to-site VPNs and for VPNs that need to transport more than just web and application traffic. The second common VPN technology is SSL VPNs (although they actually use TLS in current implementations—the common substitution of SSL for TLS continues here). SSL VPNs can either use a portal-based approach (typically using HTML5), where users access it via a web page and then access services through that connection, or they can offer a tunnel mode like IPSec VPNs. SSL VPNs are popular because they can be used without a client installed or specific endpoint configuration that is normally required for IPSec VPNs. 
SSL VPNs also provide the ability to segment application access, allowing them to be more granular without additional complex configuration to create security segments using different VPN names or hosts, as most IPSec VPN tools would require. In addition to the underlying technology that is used by VPNs, there are implementation decisions that are driven by how a VPN will be used and what traffic it needs to carry. Exam Note The Security+ exam outline lists IPSec and TLS under tunneling instead of under VPNs. The underlying technology for VPNs and tunneling is the same, and many security analysts will encounter VPN choices between IPSec VPNs and TLS VPNs before they have to choose between other types of tunneling. Just remember for the purposes of the exam that you may encounter this in either place. The first decision point for many VPN implementations is whether the VPN will be used for remote access or if it will be a site-to-site VPN. Remote-access VPNs are commonly used for traveling staff and other remote workers, and site-to-site VPNs are used to create a secure network channel between two or more sites. Since site-to-site VPNs are typically used to extend an organization's network, they are frequently always-on VPNs, meaning that they are connected and available all of the time, and that if they experience a failure they will automatically attempt to reconnect. Remote-access VPNs are most frequently used in an as-needed mode, with remote workers turning on the VPN when they need to connect to specific resources or systems or when they need a trusted network connection. Tunneling The second important decision for VPN implementations is whether they will be a split-tunnel VPN or a full-tunnel VPN. A full-tunnel VPN sends all network traffic through the VPN tunnel, keeping it secure as it goes to the remote trusted network. A split-tunnel VPN only sends traffic intended for systems on the remote trusted network through the VPN tunnel. 
Split tunnels offer the advantage of using less bandwidth for the hosting site, since network traffic that is not intended for that network will be sent out through whatever Internet service provider the VPN user is connected to. However, that means the traffic is not protected by the VPN and cannot be monitored. A full-tunnel VPN is a great way to ensure that traffic sent through an untrusted network, such as those found at a coffee shop, hotel, or other location, remains secure. If the network you are sending traffic through cannot be trusted, a split-tunnel VPN may expose more information about your network traffic than you want it to! Exam Note NAC, 802.1X, port security, and VPN all have roles to play in how networks and network traffic are secured. For the exam, you'll need to be familiar with each of them and how they're used in secure networks. Network Appliances and Security Tools There are many different types of network appliances that you should consider as part of your network design. Special-purpose hardware devices, virtual machine and cloud-based software appliances, and hybrid models in both open source and proprietary commercial versions are used by organizations. Hardware appliances can offer the advantage of being purpose-built, allowing very high-speed traffic handling capabilities or other capabilities. Software and virtual machine appliances can be easily deployed and can be scaled based on needs, whereas cloud appliances can be dynamically created, scaled, and used as needed. Regardless of the underlying system, appliances offer vendors a way to offer an integrated system or device that can be deployed and used in known ways, providing a more controlled experience than a software package or service deployed on a customer-owned server. Hardware, Software, and Vendor Choices When you choose a network appliance, you must consider more than just the functionality. 
If you're deploying a device, you also need to determine whether you need or want a hardware appliance or a software appliance that runs on an existing operating system, a virtual machine, or a cloud-based service or appliance. Drivers for that decision include the environment where you're deploying it, the capabilities you need, what your existing infrastructure is, upgradability, support, and the relative cost of the options. So, deciding that you need a DNS appliance isn't as simple as picking one off a vendor website! You should also consider whether open source or proprietary commercial options are the right fit for your organization. Open source options may be less expensive or faster to acquire in organizations with procurement and licensing restrictions. Commercial offerings may offer better support, additional proprietary features, certifications and training, or other desirable options as well. When you select a network appliance, make sure you take into account how you will deploy it— hardware, software, virtual, or cloud—and whether you want an open source or proprietary solution. Jump Servers Administrators and other staff need ways to securely operate in security zones with different security levels. Jump servers, sometimes called jump boxes, are a common solution. A jump server is a secured and monitored system used to provide that access. It is typically configured with the tools required for administrative work and is frequently accessed with SSH, RDP, or other remote desktop methods. Jump boxes should be configured to create and maintain a secure audit trail, with copies maintained in a separate environment to allow for incident and issue investigations. Load Balancing Load balancers are used to distribute traffic to multiple systems, provide redundancy, and allow for ease of upgrades and patching. They are commonly used for web service infrastructures, but other types of load balancers can also be found in use throughout many networks. 
Load balancers typically present a virtual IP (VIP), which clients send service requests to on a service port. The load balancer then distributes those requests to servers in a pool or group. Two major modes of operation are common for load balancers: Active/active load balancer designs distribute the load among multiple systems that are online and in use at the same time. Active/passive load balancer designs bring backup or secondary systems online when an active system is removed or fails to respond properly to a health check. This type of environment is more likely to be found as part of disaster recovery or business continuity environments, and it may offer less capability from the passive system to ensure some functionality remains. Load balancers rely on a variety of scheduling or load-balancing algorithms to choose where traffic is sent. Here are a few of the most common options: Round-robin sends each request to servers by working through a list, with each server receiving traffic in turn. Least connection sends traffic to the server with the fewest number of active connections. Agent-based adaptive balancing monitors the load and other factors that impact a server's ability to respond and updates the load balancer's traffic distribution based on the agent's reports. Source IP hashing uses a hash of the source IP to assign traffic to servers. Because the hash is deterministic, each client is consistently sent to the same server, while clients as a whole are spread pseudo-randomly across the pool. In addition to these, weighted algorithms take into account a weighting or score. Weighted algorithms include the following: Weighted least connection uses a least connection algorithm combined with a predetermined weight value for each server. Fixed weighted relies on a preassigned weight for each server, often based on capability or capacity. Weighted response time combines the server's current response time with a weight value to assign it traffic. Finally, load balancers may need to establish persistent sessions.
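As a rough illustration of the scheduling algorithms described above, the following Python sketch implements round-robin, least connection, and source IP hashing. The server addresses are made up, and real load balancers track connection counts and health internally rather than taking them as arguments:

```python
# Illustrative sketch of common load-balancing scheduling algorithms.
from itertools import cycle
import hashlib

servers = ["10.0.20.11", "10.0.20.12", "10.0.20.13"]  # hypothetical pool

# Round-robin: each server receives requests in turn.
_rr = cycle(servers)
def round_robin():
    return next(_rr)

# Least connection: pick the server with the fewest active connections.
def least_connection(active_connections):
    # active_connections maps server -> current connection count
    return min(active_connections, key=active_connections.get)

# Source IP hashing: a deterministic hash pins each client to one server.
def source_ip_hash(client_ip):
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Note the trade-off visible even in this sketch: round-robin needs no state about the servers, least connection needs live connection counts, and source IP hashing gives a crude form of persistence for free.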
Persistence means that a client and a server continue to communicate throughout the duration of a session. This helps servers provide a smoother experience, with consistent information maintained about the client, rather than requiring that the entire load-balanced pool be made aware of the client's session. Of course, sticky sessions also mean that load will remain on the server that the session started with, which requires caution: if the load-balancing algorithm in use doesn't watch for it, too many long-running sessions can accumulate on a single server. Factors such as the use of persistence, different server capabilities, or the use of scalable architectures can all drive choices for scheduling algorithms. Tracking server utilization by a method such as an agent-based adaptive balancing algorithm can be attractive but requires more infrastructure and overhead than a simple round-robin algorithm. Layer 4 vs. Layer 7 Another consideration that the Security+ exam outline requires you to know is the difference between devices operating at layer 4 and layer 7. This can be important for firewalls, where layer 4 versus layer 7 inspection can make a big difference in application security. A defining trait of next-generation firewalls (NGFWs) is their ability to interact with traffic at both layer 4 and layer 7. Application awareness allows them to stop application attacks and to monitor for unexpected traffic that would normally pass through a firewall operating only at the transport layer, where IP addresses, protocols, and ports are the limit of its insight. Of course, this comes with a toll—NGFWs need more CPU and memory to keep track of the complexities of application inspection, and they also need more complex rules and algorithms that can understand application traffic. Proxy Servers Proxy servers accept and forward requests, centralizing the requests and allowing actions to be taken on the requests and responses.
They can filter or modify traffic and cache data, and since they centralize requests, they can be used to support access restrictions by IP address or similar requirements. There are two types of proxy servers: Forward proxies are placed between clients and servers, and they accept requests from clients and send them forward to servers. Since forward proxies conceal the original client, they can anonymize traffic or provide access to resources that might be blocked by IP address or geographic location. They are also frequently used to allow access to resources such as those that libraries subscribe to. Reverse proxies are placed between servers and clients, and they are used to help with load balancing and caching of content. Clients can thus query a single system but have traffic load spread to multiple systems or sites. Web Filters Web filters, sometimes called content filters, are centralized proxy devices or agent-based tools that allow or block traffic based on content rules. These can be as simple as conducting Uniform Resource Locator (URL) scanning and blocking specific URLs, domains, or hosts, or they may be complex, with pattern matching, IP reputation, and other elements built into the filtering rules. Like other technologies, they can be configured with allow or deny lists as well as rules that operate on the content or traffic they filter. When deployed as hardware devices or virtual machines, they are typically a centralized proxy, which means that traffic is routed through the device. In agent-based deployments, the agents are installed on devices, meaning that the proxy is decentralized and can operate wherever the device is rather than requiring a network configured to route traffic through the centralized proxy. Regardless of their design, they typically provide content categorization capabilities that are used for URL filtering with common categories, including adult material, business, child-friendly material, and similar broad topics.
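To make the filtering logic concrete, here is a minimal Python sketch of a web filter decision that combines URL scanning against an explicit block list, content categorization, and a reputation score. All domains, categories, and thresholds below are invented for illustration:

```python
# Hypothetical web filter decision logic: block list, category, reputation.
from urllib.parse import urlparse

CATEGORIES = {"gambling.example": "gambling", "news.example": "news"}
BLOCKED_CATEGORIES = {"gambling", "adult"}
BLOCK_LIST = {"malware.example"}      # explicit deny entries
REPUTATION = {"sketchy.example": 12}  # score 0-100; lower is worse

def filter_url(url, reputation_floor=20):
    host = urlparse(url).hostname or ""
    if host in BLOCK_LIST:
        return "block: listed"                      # URL/host block rule
    if CATEGORIES.get(host) in BLOCKED_CATEGORIES:
        return "block: category"                    # content categorization
    if REPUTATION.get(host, 100) < reputation_floor:
        return "block: reputation"                  # reputation feed
    return "allow"
```

A real product would apply these checks inline (centralized proxy) or on the endpoint (agent-based), but the allow/block decision flow is the same.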
This is often tied to block rules that stop systems from visiting sites that are in an undesired category or that have been blocked due to reputation, threat, or other reasons. Proxies frequently have content filtering capabilities, but content filtering and URL filtering can also be part of other network devices and appliances such as firewalls, network security appliances, and IPSs. Data Protection Ensuring that data isn't extracted or inadvertently sent from a network is where a data loss prevention (DLP) solution comes into play. DLP solutions frequently pair agents on systems with filtering capabilities at the network border, email servers, and other likely exfiltration points. When an organization has concerns about sensitive, proprietary, or other data being lost or exposed, a DLP solution is a common option. DLP systems can use pattern-matching capabilities or can rely on tagging, including the use of metadata to identify data that should be flagged. Actions taken by DLP systems can include blocking traffic, sending notifications, or forcing identified data to be encrypted or otherwise securely transferred rather than being sent in an unencrypted or unsecure mode. Intrusion Detection and Intrusion Prevention Systems Network-based intrusion detection systems (IDSs) and intrusion prevention systems (IPSs) are used to detect threats and, in the case of IPSs, to block them. They rely on one or more of two different detection methods to identify unwanted and potentially malicious traffic: Signature-based detections rely on a known hash or signature matching to detect a threat. Anomaly-based detection establishes a baseline for an organization or network and then flags when out-of-the- ordinary behavior occurs. Although an IPS needs to be deployed in line where it can interact with the flow of traffic to stop threats, both IDSs and IPSs can be deployed in a passive mode as well. 
Passive modes can detect threats but cannot take action—which means that an IPS placed in a passive deployment is effectively an IDS. Figure 12.2 shows how these deployments can look in a simple network. FIGURE 12.2 Inline IPS vs. passive IDS deployment using a tap or SPAN port Like many of the other appliances covered in this chapter, IDS and IPS deployments can be hardware appliances, software-based, virtual machines, or cloud instances. Key decision points for selecting them include their throughput, detection methods, availability of detection rules and updates, the ability to create custom filters and detections, and their detection rates for threats in testing. Configuration Decisions: Inline vs. Tap, Active vs. Passive Two common decisions for network security devices are whether they should be inline or use a tap. Inline network devices have network traffic pass through them. This provides them the opportunity to interact with the traffic, including modifying it or stopping traffic if needed. It also creates a potential point of failure since an inline device that stops working may cause an outage. Some inline devices are equipped with features that can allow them to fail open instead of failing closed, but not all failure scenarios will activate this feature! Taps or monitors are network devices that replicate traffic for inspection. A tap provides access to a copy of network traffic while the traffic continues on. Taps don't provide the same ability to interact with traffic but also avoid the problem of potential failures of a device causing an outage. Taps are commonly used to allow for monitoring, analysis, and security purposes. Taps are divided into two types: active taps and passive taps. Active taps require power to operate and their network ports are physically separate without a direct connection between them. This means that power outages or software failures can interrupt traffic. Passive taps have a direct path between network ports. 
Passive optical taps simply split the light passing through them to create copies. Passive taps for copper networks do require power but have a direct path, meaning that a power outage will not interrupt traffic. The Security+ exam outline doesn't mention a common third option: a SPAN port or mirror port, which is a feature built into routers and switches that allows selected traffic to be copied to a designated port. SPAN ports work similarly to taps but may be impacted by traffic levels on the switch and are typically considered less secure because they can be impacted by switch and router vulnerabilities and configuration issues, unlike a passive tap that simply copies traffic. Firewalls Firewalls are one of the most common components in network design. They are deployed as network appliances or on individual devices, and many systems implement a simple firewall or firewall- like capabilities. There are two basic types of firewalls: Stateless firewalls (sometimes called packet filters) filter every packet based on data such as the source and destination IP and port, the protocol, and other information that can be gleaned from the packet's headers. They are the most basic type of firewall. Stateful firewalls (sometimes called dynamic packet filters) pay attention to the state of traffic between systems. They can make a decision about a conversation and allow it to continue once it has been approved rather than reviewing every packet. They track this information in a state table, and use the information they gather to allow them to see entire traffic flows instead of each packet, providing them with more context to make security decisions. Along with stateful and stateless firewalls, additional terms are used to describe some firewall technologies. Next-generation firewall (NGFW) devices are far more than simple firewalls. In fact, they might be more accurately described as all-in-one network security devices in many cases. 
The general term has been used to describe network security devices that include a range of capabilities such as deep packet inspection, IDS/IPS functionality, antivirus and antimalware, and other functions. Unified threat management (UTM) devices frequently include firewall, IDS/IPS, antimalware, URL and email filtering and security, data loss prevention, VPN, and security monitoring and analytics capabilities. Despite the overlap between NGFWs and UTM devices, NGFWs are typically faster and capable of more throughput because they are more focused devices. The line between UTM and NGFW devices can be confusing, and the market continues to narrow the gaps between devices as each side offers additional features. UTM devices are typically used as an “out-of-the-box” solution that can be quickly deployed and used, often for small to mid-sized organizations, whereas NGFWs typically require more configuration and expertise. UTM appliances are frequently deployed at network boundaries, particularly for an entire organization or division. Since they have a wide range of security functionality, they can replace several security devices while providing a single interface to manage and monitor them. They also typically provide a management capability that can handle multiple UTM devices at once, allowing organizations with several sites or divisions to deploy UTM appliances to protect each area while still managing and monitoring them centrally. Finally, web application firewalls (WAFs) are security devices that are designed to intercept, analyze, and apply rules to web traffic, including database queries, API calls, and other web application traffic. In many ways, a WAF is easier to understand if you think of it as a firewall combined with an intrusion prevention system. They provide deeper inspection of the traffic sent to web servers, looking for attacks and attack patterns, and then apply rules based on what they see.
This allows them to block attacks in real time, or even modify traffic sent to web servers to remove potentially dangerous elements in a query or request. Regardless of the type of firewall, common elements include the use of firewall rules that determine what traffic is allowed and what traffic is stopped. While rule syntax varies, a firewall rule typically includes a source (IP addresses, hostnames, or domains, along with ports and protocols), an allow or deny statement, and a destination (IP addresses, hosts, or domains, along with ports and protocols). A rule may read:

ALLOW TCP from 10.0.10.0/24 (any port) to 10.1.1.68/32 port 80

This rule would allow any host in the 10.0.10.0/24 subnet to reach host 10.1.1.68 on TCP port 80—a simple rule to allow a web server to work. The Security+ exam outline also calls out one other specific use of firewalls: the creation of screened subnets. A screened subnet like the example shown in Figure 12.3 uses three interfaces on a firewall. One interface is used to connect to the Internet or an untrusted network, one is used to create a secured area, and one is used to create a public area (sometimes called a DMZ). FIGURE 12.3 Screened subnet Access Control Lists Access control lists (ACLs) are rules that either permit or deny actions. For network devices, they're typically similar to firewall rules. ACLs can be simple or complex, ranging from a single statement to multiple entries that apply to traffic. Network devices may also provide more advanced forms of ACLs, including time-based, dynamic, or other ACLs that have conditions that impact their application.
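Conceptually, firewall rules and ACLs are evaluated top-down, with the first matching rule deciding the action and a default deny applying when nothing matches. The following Python sketch (with hypothetical rules) illustrates that first-match behavior:

```python
# Conceptual sketch of top-down, first-match rule evaluation.
from ipaddress import ip_address, ip_network

RULES = [  # (protocol, destination network, destination port, action)
    ("tcp", "10.0.10.0/24", 22, "allow"),    # allow SSH to the subnet
    ("tcp", "10.0.10.45/32", 443, "allow"),  # allow HTTPS to one web server
    ("any", "0.0.0.0/0", None, "deny"),      # explicit deny-all at the end
]

def evaluate(protocol, dst_ip, dst_port):
    for proto, net, port, action in RULES:
        if proto not in ("any", protocol):
            continue                              # protocol doesn't match
        if ip_address(dst_ip) not in ip_network(net):
            continue                              # destination doesn't match
        if port is not None and port != dst_port:
            continue                              # port doesn't match
        return action                             # first match wins
    return "deny"  # implicit default deny
```

Because evaluation stops at the first match, rule order matters: putting the deny-all entry first would block everything, which is why specific allow rules are listed above it.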
Cisco's IP-based ACLs use the following format:

access-list access-list-number [dynamic name] {permit|deny} [protocol] {source source-wildcard|any} {destination destination-wildcard|any} [precedence precedence] [tos tos] [established] [log|log-input] [operator destination-port|destination port]

That means that a sample Cisco ACL that allows access to a web server might read as follows:

access-list 100 permit tcp any host 10.10.10.1 eq http

ACL syntax will vary by vendor, but the basic concept of ACLs as rules that allow or deny actions, traffic, or access remains the same, allowing you to read and understand many ACLs simply by recognizing the elements of the rule statement. A sample of what an ACL might contain is shown in Table 12.1.

TABLE 12.1 Example network ACLs

Rule number  Protocol  Ports  Destination    Allow/deny  Notes
10           TCP       22     10.0.10.0/24   ALLOW       Allow SSH
20           TCP       443    10.0.10.45/32  ALLOW       Inbound HTTPS to web server
30           ICMP      ALL    0.0.0.0/0      DENY        Block ICMP

Cloud services also provide network ACLs. VPCs and other services provide firewall-like rules that can restrict or allow traffic between systems and services. Like firewall rules, these can typically be grouped, tagged, and managed using security groups or other methods. Deception and Disruption Technology A final category of network-related tools is those intended to capture information about attackers and their techniques and to disrupt ongoing attacks. Capturing information about attackers can provide defenders with useful details about their tools and processes and can help defenders build better and more secure networks based on real-world experiences. There are four major types of deception and disruption tools that are included in the Security+ exam outline. The first and most common is the use of honeypots.
Honeypots are systems that are intentionally configured to appear to be vulnerable but that are actually heavily instrumented and monitored systems that will document everything an attacker does while retaining copies of every file and command they use. They appear to be legitimate and may have tempting false information available on them. Much like honeypots, honeynets are networks set up and instrumented to collect information about network attacks. In essence, a honeynet is a group of honeypots set up to be even more convincing and to provide greater detail on attacker tools due to the variety of systems and techniques required to make it through the network of systems. Unlike honeynets and honeypots, which are used for adversarial research, honeyfiles are used for intrusion detection. A honeyfile is an intentionally attractive file that contains unique, detectable data that is left in an area that an attacker is likely to visit if they succeed in their attacks. If the data contained in a honeyfile is detected leaving the network, or is later discovered outside of the network, the organization knows that the system was breached. Honeytokens are the final item in this category. Honeytokens are data that is intended to be attractive to attackers but which is used specifically to allow security professionals to track data. They may be entries in databases, files, directories, or any other data asset that can be specifically identified. IDS, IPS, DLP, and other systems are then configured to watch for honeytokens that should not be sent outside the organization or accessed under normal circumstances because they are not actual organizational data. If they are in use or being transferred, it is likely that a malicious attacker has accessed them, thinking that they have legitimate and valuable data. 
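As a simple illustration, honeytoken detection can be as basic as watching outbound traffic for planted canary values that no legitimate traffic should ever contain. The sketch below is hypothetical (the tokens are invented); in practice, checks like this are wired into DLP, IDS, or IPS rules rather than run as standalone code:

```python
# Hypothetical honeytoken monitor: flag content containing planted canaries.
HONEYTOKENS = {
    "AKIAFAKE9EXAMPLETOKEN",         # fake cloud API key seeded in a config file
    "jdoe-canary@internal.example",  # fake address planted in a database table
}

def contains_honeytoken(payload: str) -> bool:
    # Honeytokens are unique, so a simple substring match is enough.
    return any(token in payload for token in HONEYTOKENS)

def inspect_outbound(payload: str) -> str:
    # A production DLP/IPS rule would alert and block; here we just report.
    return "alert" if contains_honeytoken(payload) else "pass"
```

The key property is that the tokens have no legitimate use, so any appearance of one in traffic or logs is a high-confidence indicator of compromise.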
If you're fascinated by honeypots and honeynets, visit the Honeynet project (http://honeynet.org), an effort that gathers tools, techniques, and other materials for adversary research using honeypots and honeynets as well as other techniques. Exam Note Appliances and tools like jump servers, load balancers and load- balancing techniques, proxy servers, web filters, DLP, IDS, IPS, firewalls, and the types of firewalls commonly encountered as well as ACLs are all included in the exam outline. Make sure you know what each is used for and how it could be used to secure an organization's infrastructure. Finally, make sure you're aware of the various deception and disruption technologies like honeyfiles, honeytokens, honeynets, and honeypots. Network Security, Services, and Management Managing your network in a secure way and using the security tools and capabilities built into your network devices is another key element in designing a secure network. Whether it is prioritizing traffic via quality of service (QoS), providing route security, or implementing secure protocols on top of your existing network fabric, network devices and systems provide a multitude of options. Out-of-Band Management Access to the management interface for a network appliance or device needs to be protected so that attackers can't seize control of it and to ensure that administrators can reliably gain access when they need to. Whenever possible, network designs must include a way to do secure out-of-band management. A separate means of accessing the administrative interface should exist. Since most devices are now managed through a network connection, modern implementations use a separate management VLAN or an entirely separate physical network for administration. 
Physical access to administrative interfaces is another option for out-of-band management, but in most cases physical access is reserved for emergencies because traveling to the network device to plug into it and manage it via USB, serial, or other interfaces is time consuming and far less useful for administrators than a network-based management plane. DNS Domain Name System (DNS) servers and service can be an attractive target for attackers since systems rely on DNS to tell them where to send their traffic whenever they try to visit a site using a human-readable name. DNS itself isn't a secure protocol—in fact, like many of the original Internet protocols, it travels in an unencrypted, unprotected state and does not have authentication capabilities built in. Fortunately, Domain Name System Security Extensions (DNSSEC) can be used to help close some of these security gaps. DNSSEC provides authentication of DNS data, allowing DNS queries to be validated even if they are not encrypted. Properly configuring DNS servers themselves is a key component of DNS security. Preventing techniques such as zone transfers, as well as ensuring that DNS logging is turned on and that DNS requests to malicious domains are blocked, are common DNS security techniques. DNS filtering is used by many organizations to block malicious domains. DNS filtering uses a list of prohibited domains, subdomains, and hosts and replaces the correct response with an alternate DNS response, often to an internal website that notes that the access was blocked and what to do about the block. DNS filtering can be an effective response to phishing campaigns by allowing organizations to quickly enter the phishing domain into a DNS filter list and redirect users who click on links in the phishing email to a trusted warning site. 
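The DNS filtering behavior described above, returning a sinkhole address for prohibited domains instead of the real answer, can be sketched as follows. The blocklist entries, sinkhole address, and upstream resolver are all hypothetical placeholders.

```python
# DNS filtering sketch: known-bad domains (and their subdomains) resolve to
# an internal warning site instead of their real address.
BLOCKLIST = {"phish.example.net", "malware.example.org"}  # hypothetical entries
SINKHOLE_IP = "10.10.10.10"  # internal "this site was blocked" page

def filtered_resolve(name, real_resolve):
    """Return the sinkhole for blocked domains, else resolve normally."""
    labels = name.lower().rstrip(".").split(".")
    # Match the exact name or any parent domain on the blocklist.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return SINKHOLE_IP
    return real_resolve(name)

fake_upstream = lambda name: "203.0.113.7"  # stand-in for a real resolver
print(filtered_resolve("login.phish.example.net", fake_upstream))  # 10.10.10.10
print(filtered_resolve("example.com", fake_upstream))              # 203.0.113.7
```

Matching parent domains is what lets an organization block an entire phishing domain, including any subdomains the attacker creates later.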
Like many other security tools, DNS filters are also commonly fed through threat, reputation, and block list feeds, allowing organizations to leverage community knowledge about malicious domains and systems. Email Security The Security+ exam outline looks at three major methods of protecting email. These include DomainKeys Identified Mail (DKIM), Sender Policy Framework (SPF), and Domain-based Message Authentication Reporting and Conformance (DMARC). DKIM allows organizations to add content to messages to identify them as being from their domain. DKIM signs both the body of the message and elements of the header, helping to ensure that the message is actually from the organization it claims to be from. It adds a DKIM-Signature header, which can be checked against the public key that is stored in public DNS entries for DKIM-enabled organizations. SPF is an email authentication technique that allows organizations to publish a list of their authorized email servers. SPF records are added to the DNS information for your domain, and they specify which systems are allowed to send email from that domain. Systems not listed in SPF will be rejected. SPF records in DNS are limited to 255 characters. This can make it tricky to use SPF for organizations that have a lot of email servers or that work with multiple external senders. In fact, SPF has a number of issues you can run into—you can read more about some of them at www.mimecast.com/content/sender-policy- framework. DMARC, or Domain-based Message Authentication Reporting and Conformance, is a protocol that uses SPF and DKIM to determine whether an email message is authentic. Like SPF and DKIM, DMARC records are published in DNS, but unlike DKIM and SPF, DMARC can be used to determine whether you should accept a message from a sender. Using DMARC, you can choose to reject or quarantine messages that are not sent by a DMARC-supporting sender. You can read an overview of DMARC at http://dmarc.org/overview. 
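To make SPF concrete, here is a simplified sketch of the receiving side's check: parse the published record and test whether the connecting server's IP is authorized. Real SPF evaluation also handles include:, a, mx, redirect, and macros; the record shown is hypothetical.

```python
# Simplified SPF check: is the sending IP covered by an ip4:/ip6: mechanism?
import ipaddress

def parse_spf(record):
    """Split an SPF record into its mechanisms (very simplified)."""
    parts = record.split()
    if not parts or parts[0] != "v=spf1":
        raise ValueError("not an SPF record")
    return parts[1:]

def ip_allowed(record, sender_ip):
    """Check ip4:/ip6: mechanisms and the trailing all mechanism only."""
    addr = ipaddress.ip_address(sender_ip)
    for mech in parse_spf(record):
        if mech.startswith(("ip4:", "ip6:")):
            if addr in ipaddress.ip_network(mech.split(":", 1)[1], strict=False):
                return True
        elif mech in ("-all", "~all"):
            return False  # unlisted senders fail (hard or soft)
    return False

record = "v=spf1 ip4:192.0.2.0/24 -all"  # hypothetical published record
print(ip_allowed(record, "192.0.2.25"))    # True: inside the authorized range
print(ip_allowed(record, "198.51.100.9"))  # False: rejected by -all
```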
If you want to see an example of a DMARC record, you can check out the DMARC information for SendGrid by using a dig command from a Linux command prompt: dig txt _dmarc.sendgrid.net. Note that it is critical to include the underscore before dmarc. The response will include a TXT record containing the domain's published DMARC policy.

If you do choose to implement DMARC, you should set it up with the p=none policy flag and review your data reports before going further to make sure you won't be inadvertently blocking important email. Although many major email services are already using DMARC, smaller providers and organizations may not be.

In addition to email security frameworks like DKIM, SPF, and DMARC, email security devices are used by many organizations. These devices, often called email security gateways, are designed to filter both inbound and outbound email while providing a variety of security services. They typically include functions like phishing protection, email encryption, attachment sandboxing to counter malware, ransomware protection functions, URL analysis and threat feed integration, and of course support for DKIM, SPF, and DMARC checking. You may also encounter the term "secure email gateway" or "SEG" to describe email security gateways outside of the Security+ exam.

Secure Sockets Layer/Transport Layer Security

The ability to encrypt data as it is transferred is key to the security of many protocols. Although the first example that may come to mind is secure web traffic via HTTPS, Transport Layer Security (TLS) is in broad use to wrap protocols throughout modern systems and networks. A key concept for the Security+ exam is the use of ephemeral keys for TLS. In ephemeral Diffie–Hellman key exchanges, each connection receives a unique, temporary key. That means that even if a key is compromised, communications that occurred in the past, or in the future in a new session, will not be exposed.
Ephemeral keys are used to provide perfect forward secrecy, meaning that even if the secrets used for key exchange are compromised, the communication itself will not be.

What about IPv6?

IPv6 still hasn't reached every network, but where it has, it can add additional complexity to many security practices and technologies. While IPv6 is supported by an increasing number of network devices, the address space and functionality it brings with it mean that security practitioners need to make sure that they understand how to monitor and secure IPv6 traffic on their network. Unlike IPv4 networks, where ICMP may be blocked by default, IPv6 relies far more heavily on ICMP, meaning that habitual security practices like blocking ICMP are a bad idea. The use of NAT in IPv4 networks is also no longer needed due to the IPv6 address space, meaning that the protection that NAT provides for some systems will no longer exist. In addition, the many automatic features that IPv6 brings, including automatic tunneling, automatic configuration, the use of dual network stacks (for both IPv4 and IPv6), and the sheer number of addresses, all create complexity. All of this means that if your organization uses IPv6, you will need to make sure you understand the security implications of both IPv4 and IPv6 and what security options you have and may still need.

SNMP

The Simple Network Management Protocol (SNMP) is used to monitor and manage network devices. SNMP objects like network switches, routers, and other devices are listed in a management information base (MIB) and are queried for SNMP information. When a device configured to use SNMP encounters an error, it sends a message known as an SNMP trap. Unlike other SNMP traffic, SNMP traps are sent to an SNMP manager from SNMP agents on devices when the device needs to notify the manager. SNMP traps include information about what occurred so that the manager can take appropriate action.
The base set of SNMP traps consists of coldStart, warmStart, linkDown, linkUp, authenticationFailure, and egpNeighborLoss. Additional custom traps can be configured and are often created by vendors specifically for their devices to provide information.

SNMP traps aren't the only way that SNMP is used to monitor and manage SNMP-enabled devices—there's a lot more to SNMP, including ways to help secure it. While you won't need to go in-depth for the Security+ exam, if you'd like to learn more about SNMP in general, you can read details about SNMP, the components involved in an SNMP management architecture, and SNMP security at www.manageengine.com/network-monitoring/what-is-snmp.html.

Monitoring Services and Systems

Without services, systems and networks wouldn't have much to do. Ensuring that an organization's services are online and accessible requires monitoring and reporting capabilities. Although checking to see if a service is responding can be simple, validating that the service is functioning as expected can be more complex.

Organizations often use multiple tiers of service monitoring. The first and most simple validates whether a service port is open and responding. That basic check can help identify significant issues such as the service failing to load, crashing, or being blocked by a firewall rule. The next level of monitoring requires interaction with the service and some understanding of what a valid response should look like. These transactions require additional functionality and may also use metrics that validate performance and response times. The final level of monitoring looks for indicators of likely failure and uses a broad range of data to identify pending problems. Service monitoring tools are built into many operations monitoring tools, SIEM devices, and other organizational management platforms.
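The first monitoring tier described above can be sketched as a basic TCP port check; the higher tiers would then issue a real transaction and validate the response content and timing. The host and port values are placeholders.

```python
# Tier-one service monitoring: is the service port open and accepting
# TCP connections within a timeout?
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (placeholder hostname):
# port_open("intranet.example.com", 443)
```

A scheduler that runs this check periodically and alerts on state changes would be the minimal equivalent of the first monitoring tier.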
Configuring service-level monitoring can provide insight into ongoing issues for security administrators, as service failures or issues can be an indicator of an incident. File Integrity Monitors The infrastructure and systems that make up a network are a target for attackers, who may change configuration files, install their own services, or otherwise modify systems that need to be trustworthy. Detecting those changes and either reporting on them or restoring them to normal is the job of a file integrity monitor. Although there are numerous products on the market that can handle file integrity monitoring, one of the oldest and best known is Tripwire, a file integrity monitoring tool with both commercial and open source versions. File integrity monitoring tools like Tripwire create a signature or fingerprint for a file, and then monitor the file and filesystem for changes to monitored files. They integrate numerous features to allow normal behaviors like patching or user interaction, but they focus on unexpected and unintended changes. A file integrity monitor can be a key element of system security design, but it can also be challenging to configure and monitor if it is not carefully set up. Since files change through a network and on systems all the time, file integrity monitors can be noisy and require time and effort to set up and maintain. Hardening Network Devices Much like the workstations and other endpoints we explored in Chapter 11, “Endpoint Security,” network devices also need to be hardened to keep them secure. Fortunately, hardening guidelines and benchmarks exist for many device operating systems and vendors allowing security practitioners to follow industry best practices to secure their devices. The Center for Internet Security (CIS) as well as manufacturers themselves provide lockdown and hardening guidelines for many common network devices like switches and routers. 
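Returning to file integrity monitoring, the hash-and-compare approach used by tools like Tripwire can be sketched as a baseline of SHA-256 fingerprints. This is a simplification of what such tools actually store and monitor.

```python
# File integrity monitoring sketch: record a hash baseline, then re-hash
# monitored files and report any whose contents have changed.
import hashlib
from pathlib import Path

def fingerprint(path):
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def baseline(paths):
    """Build the trusted baseline for a set of monitored files."""
    return {str(p): fingerprint(p) for p in paths}

def changed_files(base):
    """Files whose current hash no longer matches the baseline."""
    return [p for p, digest in base.items() if fingerprint(p) != digest]
```

A real monitor also tracks permissions, ownership, and timestamps, and whitelists expected changes like patching, which is where the tuning effort the chapter mentions comes in.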
Another important step in hardening network devices is to protect their management console. For many organizations that means putting management ports onto an isolated VLAN that requires access using a jump server or VPN. Physical security remains critical as well. Network closets are typically secured, may be monitored, and often take advantage of electronic access mechanisms to allow tracking of who accesses secured spaces. Exam Note Securing the services that a network provides is a key element in the Security+ exam outline. That means you need to know about quality of service, DNS and email security, the use of encryption via TLS to encapsulate other protocols for secure transport, SNMP and monitoring services and systems, file integrity monitoring to track changes, and how network devices are hardened and their management interfaces are protected. Secure Protocols Networks carry traffic using a multitude of different protocols operating at different network layers. Although it is possible to protect networks by using encrypted channels for every possible system, in most cases networks do not encrypt every connection from end to end. Therefore, choosing and implementing secure protocols properly is critical to a defense-in-depth strategy. Secure protocols can help ensure that a system or network breach does not result in additional exposure of network traffic. Using Secure Protocols Secure protocols have places in many parts of your network and infrastructure. Security professionals need to be able to recommend the right protocol for each of the following scenarios: Voice and video rely on a number of common protocols. Videoconferencing tools often rely on HTTPS, but secure versions of the Session Initiation Protocol (SIP) and the Real- time Transport Protocol (RTP) exist in the form of SIPS and SRTP, which are also used to ensure that communications traffic remains secure. 
A secure version of the Network Time Protocol (NTP) exists and is called NTS, but NTS has not been widely adopted. Like many other protocols you will learn about in this chapter, NTS relies on TLS. Unlike other protocols, NTS does not protect the time data. Instead, it focuses on authentication to make sure that the time information is from a trusted server and has not been changed in transit. Email and web traffic relies on a number of secure options, including HTTPS, IMAPS, POPS, and security protocols like Domain-based Message Authentication Reporting and Conformance (DMARC), DomainKeys Identified Mail (DKIM), and Sender Policy Framework (SPF) as covered earlier in this chapter. File Transfer Protocol (FTP) has largely been replaced by a combination of HTTPS file transfers and SFTP or FTPS, depending on organizational preferences and needs. Directory services like LDAP can be moved to LDAPS, a secure version of LDAP. Remote access technologies—including shell access, which was once accomplished via telnet and is now almost exclusively done via SSH—can also be secured. Microsoft's RDP is encrypted by default, but other remote access tools may use other protocols, including HTTPS, to ensure that their traffic is not exposed. Domain name resolution remains a security challenge, with multiple efforts over time that have had limited impact on DNS protocol security, including DNSSEC and DNS reputation lists. Routing and switching protocol security can be complex, with protocols like Border Gateway Protocol (BGP) lacking built-in security features. Therefore, attacks such as BGP hijacking attacks and other routing attacks remain possible. Organizations cannot rely on a secure protocol in many cases and need to design around this lack. Network address allocation using DHCP does not offer a secure protocol, and network protection against DHCP attacks relies on detection and response rather than a secure protocol. 
Subscription services such as cloud tools and similar services frequently leverage HTTPS but may also provide other secure protocols for their specific use cases. The wide variety of possible subscriptions and types of services means that these services must be assessed individually with an architecture and design review, as well as data flow reviews all being part of best practices to secure subscription service traffic if options are available. That long list of possible security options and the notable lack of secure protocols for DHCP, NTP, and BGP mean that although secure protocols are a useful part of a security design, they are just part of that design process. As a security professional, your assessment should identify whether an appropriate secure protocol option is included, if it is in use, and if it is properly configured. Even if a secure protocol is in use, you must then assess the other layers of security in place to determine whether the design or implementation has appropriate security to meet the risks that it will face in use. Secure Protocols The Security+ exam focuses on a number of common protocols that test takers need to know how to identify and implement. As you read this section, take into account when you would recommend a switch to the secure protocol, whether both protocols might coexist in an environment, and what additional factors would need to be considered if you implemented the protocol. These factors include client configuration requirements, a switch to an alternate port, a different client software package, impacts on security tools that may not be able to directly monitor encrypted traffic, and similar concerns. As you review these protocols, pay particular attention to the nonsecure protocol, the original port and if it changes with the secure protocol, and which secure port replaces it. 
Organizations rely on a wide variety of services, and the original implementations for many of these services, such as file transfer, remote shell access, email retrieval, web browsing, and others, were plain-text implementations that allowed the traffic to be easily captured, analyzed, and modified. This meant that confidentiality and integrity for the traffic that these services relied on could not be ensured, which has led to the implementation of secure versions of these protocols. Security analysts are frequently called on to identify insecure services and to recommend or help implement secure alternatives. Table 12.2 shows a list of common protocols and their secure replacements. Many secure protocols rely on TLS to protect traffic, whereas others implement AES encryption or use other techniques to ensure that they can protect the confidentiality and integrity of the data that they transfer. Many protocols use a different port for the secure version of the protocol, but others rely on an initial negotiation or service request to determine whether or not traffic will be secured.

TABLE 12.2 Secure and unsecure protocols

Unsecure protocol | Original port    | Secure protocol option | Secure port(s)                                    | Notes
DNS               | UDP/TCP 53       | DNSSEC                 | UDP/TCP 53                                        |
FTP               | TCP 21 (and 20)  | FTPS                   | TCP 21 in explicit mode, TCP 990 in implicit mode | Using TLS
FTP               | TCP 21 (and 20)  | SFTP                   | TCP 22                                            | Using SSH
HTTP              | TCP 80           | HTTPS                  | TCP 443                                           | Using TLS
IMAP              | TCP 143          | IMAPS                  | TCP 993                                           | Using TLS
LDAP              | UDP and TCP 389  | LDAPS                  | TCP 636                                           | Using TLS
POP3              | TCP 110          | Secure POP3 (POP3S)    | TCP 995                                           | Using TLS
RTP               | UDP 16384–32767  | SRTP                   | UDP 5004                                          |
SNMP              | UDP 161 and 162  | SNMPv3                 | UDP 161 and 162                                   |
Telnet            | TCP 23           | SSH                    | TCP 22                                            |

Test takers should recognize each of these:

Domain Name System Security Extensions (DNSSEC) focuses on ensuring that DNS information is not modified or malicious, but it doesn't provide confidentiality like many of the other secure protocols listed here do.
DNSSEC uses digital signatures, allowing systems that query a DNSSEC-equipped server to validate that the server's signature matches the DNS record. DNSSEC can also be used to build a chain of trust for IPSec keys, SSH fingerprints, and similar records.

Simple Network Management Protocol, version 3 (SNMPv3) improves on previous versions of SNMP by providing authentication of message sources, message integrity validation, and confidentiality via encryption. It supports multiple security levels, but only the authPriv level uses encryption, meaning that insecure implementations of SNMPv3 are still possible. Simply using SNMPv3 does not automatically make SNMP information secure.

Secure Shell (SSH) is a protocol used for remote console access to devices and is a secure alternative to telnet. SSH is also often used as a tunneling protocol or to support other uses like SFTP. SSH can use SSH keys for authentication. As with many uses of certificate- or key-based authentication, a lack of a password or weak passwords as well as poor key handling can make SSH far less secure in use.

Hypertext Transfer Protocol over SSL/TLS (HTTPS) relies on TLS in modern implementations but is often called SSL despite this. Like many of the protocols discussed here, the underlying HTTP protocol relies on TLS to provide security in HTTPS implementations.

Secure Real-time Transport Protocol (SRTP) is a secure version of the Real-time Transport Protocol, a protocol designed to provide audio and video streams via networks. SRTP uses encryption and authentication to attempt to reduce the likelihood of successful attacks, including replay and denial-of-service attempts. RTP is paired with a control protocol, RTCP, which monitors the quality of service (QoS) and synchronization of streams; RTCP has a secure equivalent, SRTCP, as well.

Secure Lightweight Directory Access Protocol (LDAPS) is a TLS-protected version of LDAP that offers confidentiality and integrity protections.
Email-Related Protocols Although many organizations have moved to web-based email, email protocols like Post Office Protocol (POP) and Internet Message Access Protocol (IMAP) remain in use for mail clients. Secure protocol options that implement TLS as a protective layer exist for both, resulting in the deployment of POPS and IMAPS. Secure/Multipurpose Internet Mail Extensions (S/MIME) provides the ability to encrypt and sign MIME data, the format used for email attachments. Thus, the content and attachments for an email can be protected, while providing authentication, integrity, nonrepudiation, and confidentiality for messages sent using S/MIME. Unlike many of the other protocols discussed here, S/MIME requires a certificate for users to be able to send and receive S/MIME- protected messages. A locally generated certificate or one from a public certificate authority (CA) is needed. This requirement adds complexity for S/MIME users who want to communicate securely with other individuals, because certificate management and validation can become complex. For this reason, S/MIME is used less frequently, despite broad support by many email providers and tools. SMTP itself does not provide a secure option, although multiple efforts have occurred over time to improve SMTP security, including attempts to standardize on an SMTPS service. However, SMTPS has not entered broad usage. Now, email security efforts like DomainKeys Identified Mail (DKIM), Domain-based Message Authentication Reporting and Conformance (DMARC), and Sender Policy Framework (SPF) are all part of efforts to make email more secure and less prone to spam. Email itself continues to traverse the Internet in unencrypted form through SMTP, which makes S/MIME one of the few broadly supported options to ensure that messages are encrypted and secure. Of course, a significant portion of email is accessed via the web, effectively making HTTPS the most common secure protocol for email access. 
File Transfer Protocols Although file transfer via FTP is increasingly uncommon, secure versions of FTP do exist and remain in use. There are two major options: FTPS, which implements FTP using TLS, and SFTP, which leverages SSH as a channel to perform FTP-like file transfers. SFTP is frequently chosen because it can be easier to get through firewalls since it uses only the SSH port, whereas FTPS can require additional ports, depending on the configuration. IPSec IPSec (Internet Protocol Security) is more than just a single protocol. In fact, IPSec is an entire suite of security protocols used to encrypt and authenticate IP traffic. The Security+ exam outline focuses on two components of the standard: Authentication Header (AH) uses hashing and a shared secret key to ensure integrity of data and validates senders by authenticating the IP packets that are sent. AH can ensure that the IP payload and headers are protected. Encapsulating Security Payload (ESP) operates in either transport mode or tunnel mode. In tunnel mode, it provides integrity and authentication for the entire packet; in transport mode, it only protects the payload of the packet. If ESP is used with an authentication header, this can cause issues for networks that need to change IP or port information. You should be aware of a third major component, security associations (SAs), but SAs aren't included in the exam outline. Security associations provide parameters that AH and ESP require to operate. The Internet Security Association and Key Management Protocol (ISAKMP) is a framework for key exchange and authentication. It relies on protocols such as Internet Key Exchange (IKE) for implementation of that process. In essence, ISAKMP defines how to authenticate the system you want to communicate with, how to create and manage SAs, and other details necessary to secure communication. IKE is used to set up a security association using X.509 certificates. 
IPSec is frequently used for VPNs, where it is used in tunnel mode to create a secure network connection between two locations.

Network Attacks

The Security+ exam expects you to be familiar with a few network attack techniques and concepts. As you review these, think about how you would identify them, prevent them, and respond to them in a scenario where you discovered them.

On-Path Attacks

An on-path (sometimes also called a man-in-the-middle [MitM]) attack occurs when an attacker causes traffic that should be sent to its intended recipient to be relayed through a system or device the attacker controls. Once the attacker has traffic flowing through that system, they can eavesdrop or even alter the communications as they wish. Figure 12.4 shows how traffic flow is altered from normal after an on-path attack has succeeded.

FIGURE 12.4 Communications before and after an on-path attack

An on-path attack can be used to conduct SSL stripping, an attack that in modern implementations removes TLS encryption to read the contents of traffic that is intended to be sent to a trusted endpoint. A typical SSL stripping attack occurs in three phases:

1. A user sends an HTTP request for a web page.
2. The server responds with a redirect to the HTTPS version of the page.
3. The user sends an HTTPS request for the page they were redirected to, and the website loads.

An SSL stripping attack uses an on-path attack when the HTTP request occurs, redirecting the rest of the communications through a system that an attacker controls, allowing the communication to be read or possibly modified. Although SSL stripping attacks can be conducted on any network, one of the most common implementations is through an open wireless network, where the attacker can control the wireless infrastructure and thus modify traffic that passes through their access point and network connection.
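The redirect in step 2 is exactly what an SSL stripping proxy rewrites to keep the victim on plain HTTP. A minimal client-side sanity check, sketched here with hypothetical URLs, flags a redirect target that fails to upgrade the connection:

```python
# Flag a redirect Location header that keeps the browser on unencrypted HTTP
# when an upgrade to HTTPS was expected (as in an SSL stripping attack).
from urllib.parse import urlparse

def suspicious_redirect(location_header):
    """True if an absolute redirect target uses plain http://."""
    return urlparse(location_header).scheme == "http"

print(suspicious_redirect("https://bank.example.com/login"))  # False: upgraded
print(suspicious_redirect("http://bank.example.com/login"))   # True: possible strip
```

Relative redirects (no scheme) are not judged by this sketch; real protections rely on HSTS and browser policy rather than ad hoc checks like this one.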
Stopping SSL Stripping and HTTPS On-Path Attacks

Protecting against SSL stripping attacks can be done in a number of ways, including configuring systems to expect certificates for sites to be issued by a known certificate authority, thus preventing certificates for alternate sites or self-signed certificates from working. Redirects to secure websites are also a popular target for attackers, since unencrypted requests for the HTTP version of a site could be redirected to a site of the attacker's choosing to allow for an on-path attack. The HTTP Strict Transport Security (HSTS) security policy mechanism is intended to prevent attacks like these that rely on protocol downgrades and cookie hijacking by forcing browsers to connect only via HTTPS using TLS. Unfortunately, HSTS only works after a user has visited the site at least once, allowing attackers to continue to leverage on-path attacks. Attacks like these, as well as the need to ensure user privacy, have led many websites to require HTTPS throughout the site, reducing the chances of users visiting an HTTP site that introduces the opportunity for an SSL stripping attack. Browser plug-ins like the Electronic Frontier Foundation's HTTPS Everywhere can also help ensure that requests that might have traveled via HTTP are instead sent via HTTPS automatically.

A final on-path attack variant is the browser-based on-path attack (formerly man-in-the-browser [MitB or MiB]). This attack relies on a Trojan that is inserted into a user's browser. The Trojan is then able to access and modify information sent and received by the browser. Since the browser receives and decrypts information, a browser-based on-path attack can successfully bypass TLS encryption and other browser security features, and it can also access sites with open sessions or that the browser is authenticated to, allowing a browser-based on-path attack to be a very powerful option for an attacker.
Since browser-based on-path attacks require a Trojan to be installed, either as a browser plug-in or a proxy, system-level security defenses like antimalware tools and system configuration management and monitoring capabilities are best suited to preventing them. On-path attack indicators are typically changed network gateways or routes, although sophisticated attackers might also compromise network switches or routers to gain access to and redirect traffic.

Domain Name System Attacks

Stripping away encryption isn't the only type of network attack that can provide malicious actors with visibility into your traffic. In fact, simply having traffic sent to a system that they control is much simpler if they can manage it! That's where DNS attacks come into play.

Domain hijacking changes the registration of a domain, either through technical means like a vulnerability with a domain registrar or control of a system belonging to an authorized user, or through nontechnical means such as social engineering. The end result of domain hijacking is that the domain's settings and configuration can be changed by an attacker, allowing them to intercept traffic, send and receive email, or otherwise take action while appearing to be the legitimate domain holder. Domain hijacking isn't the only way that domains can be acquired for malicious purposes. In fact, many domains end up in hands other than those of the intended owner because they are not properly renewed. Detecting domain hijacking can be difficult if you are simply a user of systems and services from the domain, but domain name owners can leverage security tools and features provided by domain registrars to both protect and monitor their domains.

DNS poisoning can be accomplished in multiple ways. One form is an on-path attack in which an attacker provides a DNS response while pretending to be an authoritative DNS server.
Vulnerabilities in DNS protocols or implementations can also permit DNS poisoning, but they are rarer. DNS poisoning can also involve poisoning the DNS cache on systems. Once a malicious DNS entry is in a system's cache, it will continue to use that information until the cache is purged or updated. This means that DNS poisoning can have a longer-term impact, even if it is discovered and blocked by an IPS or other security device. DNS cache poisoning may be noticed by users or may be detected by network defenses like an IDS or IPS, but it can be difficult to detect if done well. DNSSEC can help prevent DNS poisoning and other DNS attacks by validating both the origin of DNS information and ensuring that the DNS responses have not been modified. You can read more about DNSSEC at www.icann.org/resources/pages/dnssec-what-is-it-why- important-2019-03-05-en. When domain hijacking isn't possible and DNS cannot be poisoned, another option for attackers is URL redirection. URL redirection can take many forms, depending on the vulnerability that attackers leverage, but one of the most common is to insert alternate IP addresses into a system's hosts file. The hosts file is checked when a system looks up a site via DNS and will be used first, making a modified hosts file a powerful tool for attackers who can change it. Modified hosts files can be manually checked, or they can be monitored by system security antimalware tools that know the hosts file is a common target. In most organizations, the hosts file for the majority of machines will never be modified from its default, making changes easy to spot. Although DNS attacks can provide malicious actors with a way to attack your organization, you can also leverage DNS-related inform
