Chapter 11
Endpoint Security

THE COMPTIA SECURITY+ EXAM OBJECTIVES COVERED IN THIS CHAPTER INCLUDE:

Domain 1.0: General Security Concepts

1.4. Explain the importance of using appropriate cryptographic solutions.
- Tools (Trusted Platform Module (TPM), Hardware security module (HSM), Key management system, Secure enclave)

Domain 2.0: Threats, Vulnerabilities, and Mitigations

2.3. Explain various types of vulnerabilities.
- Operating system (OS)-based
- Hardware (Firmware, End-of-life, Legacy)
- Misconfiguration

2.5. Explain the purpose of mitigation techniques used to secure the enterprise.
- Patching
- Encryption
- Configuration enforcement
- Decommissioning
- Hardening techniques (Encryption, Installation of endpoint protection, Host-based firewall, Host-based intrusion prevention system (HIPS), Disabling ports/protocols, Default password changes, Removal of unnecessary software)

Domain 3.0: Security Architecture

3.1. Compare and contrast security implications of different architecture models.
- Architecture and infrastructure concepts (IoT, Industrial control systems (ICS)/supervisory control and data acquisition (SCADA), Real-time operating system (RTOS), Embedded systems)

Domain 4.0: Security Operations

4.1. Given a scenario, apply common security techniques to computing resources.
- Secure baselines (Establish, Deploy, Maintain)
- Hardening targets (Workstations, Servers, ICS/SCADA, Embedded systems, RTOS, IoT devices)

4.2. Explain the security implications of proper hardware, software, and data asset management.
- Acquisition/procurement process
- Assignment/accounting (Ownership, Classification)
- Monitoring/asset tracking (Inventory, Enumeration)
- Disposal/decommissioning (Sanitization, Destruction, Certification, Data retention)

4.4. Explain security alerting and monitoring concepts and tools.
- Tools (Antivirus, Data loss prevention (DLP))

4.5. Given a scenario, modify enterprise capabilities to enhance security.
- Operating system security (Group Policy, SELinux)
- Endpoint detection and response (EDR)/extended detection and response (XDR)

Protecting endpoints in your organization is a major portion of the daily tasks for many security professionals. For most organizations, endpoints significantly outnumber the servers and network devices, and since end users control or use them, they face a wide variety of threats that a server is unlikely to encounter.

In this chapter, you will start by learning about operating system and hardware vulnerabilities to provide context for how and why endpoints, servers, and devices need protection. Next, you'll explore endpoint protection techniques, including how to secure a system's boot process, how modern endpoint detection and response tools are used to catch malware before it can take over systems, and how antimalware and antivirus tools detect, prevent, and remediate malware infections. You'll also learn about concepts like data loss prevention, network defense technologies, and what system and service hardening involves. Operating system security, configuration standards, and what to do with disks and removable media when they're being reused or removed from service are also part of the hardening and system protection process.

The next portion of the chapter focuses on embedded and specialized systems, which are common endpoint devices that have different security requirements than traditional desktop and mobile operating systems. You'll learn about a few embedded platforms and the real-time operating systems and hardware that help drive their specialized requirements, as well as the systems they are integrated into, such as the SCADA and ICS systems used for industrial automation and management. You'll explore the Internet of Things and the implications it and other specialized systems have for security. Finally, you'll finish the chapter with an exploration of asset and data management in the context of security.
You'll review procurement, asset tracking and accounting, and how asset inventories play a role in organizational security operations.

Operating System Vulnerabilities

One of the key elements in security operations is properly securing operating systems. Whether they're workstations, mobile devices, servers, or another type of device, the underlying operating system has a big impact on your ability to secure your organization. There are a number of ways that operating systems can be vulnerable:

- Vulnerabilities in the operating system itself can be exploited by attackers. This drives ongoing operating system patching as well as configuring systems to minimize their attack footprint, or the number of services that are exposed and can thus potentially be targeted.
- Defaults like default passwords and insecure settings are potential paths for attackers as well. As you read further in this chapter, you'll explore configuration baselines and security practices intended to avoid insecure defaults.
- Configurations can also introduce vulnerabilities. Unlike defaults, configurations are intentional but may be insecure. That means that security tools supporting concepts like mandatory access control, which limit the potential for configuration issues, are useful security controls.
- Misconfiguration, unlike configuration and defaults, occurs when a mistake is made. Human error remains a consistent way for attackers to successfully overcome default operating system and application security.

As you read further in this chapter, you'll explore many of the most common ways that organizations attempt to address these issues for operating systems, services, and software.

Exam Note

The Security+ exam outline is vague about operating system–based vulnerabilities; unlike other topics, it just lists "OS-based." As you're studying, you'll want to think about how your choice of operating system, its defaults, security configuration, and support model all impact your organization's security.
Hardware Vulnerabilities

Endpoints also face hardware vulnerabilities that require consideration in security designs. While many hardware vulnerabilities are challenging or even impossible to deal with directly, compensating controls can be leveraged to help ensure that the impact of hardware vulnerabilities is limited.

Exam Note

The Security+ exam outline notes that test takers should be able to explain various types of vulnerabilities, including hardware vulnerabilities related to firmware, end-of-life hardware, and legacy hardware.

Firmware is the embedded software that allows devices to function. It is tightly connected to the hardware of a device, and it may or may not be possible to update it depending on the design and implementation of the device's hardware. Computer and mobile device firmware is typically capable of being updated but may require manual updates rather than being part of an automated update process.

Firmware attacks may occur through any path that allows access to the firmware. That includes executable updates, user downloads of malicious firmware that appears to be legitimate, and even remote network-enabled updates for devices that provide networked access to their firmware or management tools. Since firmware is embedded in the device, firmware vulnerabilities are a particular concern because reinstalling an operating system or other software will not remove malicious firmware.

An example of this is 2022's MoonBounce malware, which is remotely installable and targets a computer's Serial Peripheral Interface (SPI) flash memory. Once there, it will persist through reboots and reinstallations of the operating system. You can read more about MoonBounce at www.tomshardware.com/news/moonbounce-malware-hides-in-your-bios-chip-persists-after-drive-formats. PC UEFI attacks aren't the only firmware attacks in the wild, however.
News broke in 2023 of millions of infected Android devices that were shipped with malware in their firmware, meaning that purchasers couldn't trust their brand-new devices! This means that firmware validation remains an important tool for security practitioners. The techniques we'll discuss later in this chapter, like trusted boot, are increasingly critical security controls.

End-of-life or legacy hardware drives concerns about lack of support. Once a device or system has reached end-of-life, it typically will also reach the end of its support from the manufacturer. Without further updates such as security fixes, you will be unable to address security problems directly and will need compensating controls, which may or may not be usable and appropriate for customer organizations.

Vendors use a number of terms to describe their sales and support life cycles. Common terms and definitions include the following, but each vendor may have their own interpretation of what they mean!

- End of sales: The last date at which a specific model or device will be sold, although devices often remain in the supply chain through resellers for a period of time.
- End of life: While the equipment or device is no longer sold, it remains supported. End-of-life equipment should typically be on a path to retirement, but it has some usable lifespan left.
- End of support: The last date on which the vendor will provide support and/or updates.
- Legacy: This term is less well defined but is typically used to describe hardware, software, or devices that are unsupported.

Protecting Endpoints

As a security professional, you'll be asked to recommend, implement, manage, or assess security solutions intended to protect desktops, mobile devices, servers, and a variety of other systems found in organizations. These devices are often called endpoints, meaning they're an endpoint of a network, whether that is a wired or a wireless network.
With such a broad range of potential endpoints, the menu of security options is also very broad. As a security practitioner, you need to know what options exist, where and how they are commonly deployed, and what considerations you need to take into account.

Preserving Boot Integrity

Keeping an endpoint secure while it is running starts as it boots up. If untrusted or malicious components are inserted into the boot process, the system cannot be trusted. Security practitioners in high-security environments need a means of ensuring that the entire boot process is provably secure. Fortunately, modern Unified Extensible Firmware Interface (UEFI) firmware (the replacement for the traditional Basic Input/Output System [BIOS]) can leverage two different techniques to ensure that the system is secure.

Secure boot ensures that the system boots using only software that the original equipment manufacturer (OEM) trusts. To perform a secure boot operation, the system must have a signature database listing the secure signatures of trusted software and firmware for the boot process (see Figure 11.1).

FIGURE 11.1 UEFI Secure boot high-level process

The second security feature intended to help prevent boot-level malware is measured boot. These boot processes measure each component, starting with the firmware and ending with the boot start drivers. Measured boot does not validate against a known good list of signatures before booting; instead, it relies on the UEFI firmware to hash the firmware, bootloader, drivers, and anything else that is part of the boot process. The data gathered is stored in the Trusted Platform Module (TPM), and the logs can be validated remotely to let security administrators know the boot state of the system. This boot attestation process allows comparison against known good states, and administrators can take action if the measured boot shows a difference from the accepted or secure known state.
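Measured boot builds a hash chain: each component's measurement is folded into a running value, so any change to any component, or to the order of components, produces a different final result. The following Python sketch simulates a simplified TPM PCR "extend" operation using SHA-256; the component names are illustrative placeholders, not actual boot artifacts.

```python
import hashlib

def extend_pcr(pcr: bytes, component: bytes) -> bytes:
    """Simulate a TPM PCR extend: new PCR = SHA-256(old PCR || SHA-256(component))."""
    measurement = hashlib.sha256(component).digest()
    return hashlib.sha256(pcr + measurement).digest()

# Measure each boot component in order, starting from a zeroed PCR.
pcr = bytes(32)
for component in [b"firmware-image", b"bootloader", b"boot-start-driver"]:
    pcr = extend_pcr(pcr, component)

# An attestation service would compare this final value (plus the
# measurement log) against a known good state for the system.
print(pcr.hex())
```

Because the chain is one-way, an attacker who tampers with the bootloader cannot pick bytes that make the final value match the known good state, which is what makes remote attestation against the TPM's records meaningful.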
This process allows the remote server to make decisions about the state of the system based on the information it provides, allowing access control and quarantine options.

You can read more about Microsoft's Windows 10/11 implementation of the secure boot process at http://docs.microsoft.com/en-us/windows/security/information-protection/secure-the-windows-10-boot-process#. For a deeper dive into UEFI and TPM, read the Infosec Institute's write-up at http://resources.infosecinstitute.com/uefi-and-tpm.

Windows has a Trusted Boot process that allows the operating system to check the integrity of the components involved in the startup process.

In both cases, boot integrity begins with the hardware root of trust. The hardware root of trust for a system contains the cryptographic keys that secure the boot process. The system or device inherently trusts the hardware root of trust, which means the root of trust itself must be secure. One common implementation of a hardware root of trust is the TPM chip built into many computers. TPM chips are frequently used to provide built-in encryption, and they provide three major functions, which you may remember from Chapter 1, "Today's Security Professional":

- Remote attestation, allowing hardware and software configurations to be verified
- Binding, which encrypts data
- Sealing, which encrypts data and sets requirements for the state of the TPM chip before decryption

TPM chips are one common solution; others include serial numbers that cannot be modified or cloned, and physically unclonable functions (PUFs), which are unique to the specific hardware device and provide a unique identifier or digital fingerprint for the device. A physically unclonable function is based on the unique features of a microprocessor that are created when it is manufactured and are not intentionally created or replicated. Similar techniques are used for Apple's Secure Enclave, a dedicated secure element that is built into Apple's system on chip (SoC) modules.
The Secure Enclave provides hardware key management that is isolated from the main CPU, protecting keys throughout their life cycle and usage. Other vendors have similar capabilities, including Google's Titan M and Samsung's TrustZone and Knox functionality.

Protecting the Keys to the Kingdom

A related technology is the hardware security module (HSM). Hardware security modules are typically external devices or plug-in cards used to create, store, and manage digital keys for cryptographic functions and authentication, as well as to offload cryptographic processing. HSMs are often used in high-security environments and are normally certified to meet standards like Federal Information Processing Standards (FIPS) 140 or Common Criteria (ISO/IEC 15408).

Cryptographic key management systems (KMSs) are used to store keys and certificates as well as to manage them centrally. This allows organizations to effectively control and manage their secrets while also enforcing policies. Cloud providers frequently offer a KMS as a service for their environments as part of their offerings.

Exam Note

As you prepare for the exam, keep in mind the roles that TPMs, HSMs, key management systems (KMSs), and secure enclaves play in system security and organizational operations. Remember that TPMs are used for system security; HSMs are used to create, store, and manage keys for multiple systems; and a KMS is a service used to manage secrets.

Endpoint Security Tools

Once a system is running, ensuring that the system itself is secure is a complex task. Many types of security tools are available for endpoint systems, and the continuously evolving market for solutions means that traditional tool categories are often blurry. Despite that, a few common concepts and categories exist that are useful to help describe capabilities and types of tools.

Antivirus and Antimalware

One of the most common security tools is antivirus and antimalware software.
Although more advanced antidetection, obfuscation, and other defensive tools are always appearing in new malware packages, using antimalware packages in enterprise environments remains a useful defensive layer in many situations. For ease of reference, we will refer to the broad category of antivirus and antimalware tools as antimalware tools. Tools like these work to detect malicious software and applications through a variety of means. Here are the most common methods:

- Signature-based detection uses a hash or pattern-based signature detection method to identify files or components of the malware that have been previously observed. Traditional antimalware tools often relied on signature-based detection as the first line of defense for systems, but attackers have increasingly used methods like polymorphism, which changes the malware every time it is installed, as well as encryption and packing, to make signatures less useful.
- Heuristic-based, or behavior-based, detection looks at what actions the malicious software takes and matches them to profiles of unwanted activities. Heuristic-based detection systems can identify new malware based on what it is doing, rather than just looking for a match to a known fingerprint.
- Artificial intelligence (AI) and machine learning (ML) systems are increasingly common throughout the security tools space. They leverage large amounts of data to find ways to identify malware, and they may include heuristic, signature, and other detection capabilities.
- Sandboxing is used by some tools and by the antimalware vendors themselves to isolate and run sample malicious code. A sandbox is a protected environment where unknown, untrusted, potentially dangerous, or known malicious code can be run to observe it. Sandboxes are instrumented to allow all the actions taken by the software to be documented, providing the ability to perform in-depth analysis.
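As a concrete illustration of the signature-based approach, the sketch below hashes a file and checks the result against a set of known bad hashes. The hash set here is an empty placeholder; a real antimalware product ships a vendor-maintained signature database and uses far more sophisticated matching.

```python
import hashlib

# Placeholder signature database: SHA-256 hashes of previously observed
# malware. Real products ship and continuously update a far larger database.
known_malware_hashes: set[str] = set()

def is_known_malware(path: str) -> bool:
    """Signature-based check: hash the file, then look it up in the database."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't have to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() in known_malware_hashes
```

This also shows why polymorphism defeats naive signature matching: changing even a single byte of the file changes its hash, so the lookup fails.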
Playing in a Sandbox

The term sandbox describes an isolated environment where potentially dangerous or problematic software can be run. Major antimalware sites, tools, and vendors all use sandboxing techniques to test potential malware samples. Some use multiple antimalware tools running in virtual environments to validate samples, and others use highly instrumented sandboxes to track every action taken by malware once it is run. Of course, there is a constant battle between malware authors and antimalware companies as malware creators look for ways to detect sandboxes so that they can prevent their tools from being analyzed and as antimalware companies develop new tools to defeat those techniques.

Commercial and open source sandbox technologies are available, including Cuckoo Sandbox, an automated malware analysis tool. You can read more about Cuckoo at http://cuckoosandbox.org. You'll also find sandboxing capabilities built into advanced antimalware tools, so you may already have sandboxing available to your organization.

Antimalware tools can be installed on mobile devices, desktops, and other endpoints, as well as on the devices and systems that handle network traffic and email, and anywhere else that malicious software could attack or be detected. Using an antimalware package has been a consistent recommendation from security professionals for years since it is a last line of defense against systems being infected or compromised. That also means that attackers have focused on bypassing and defeating antimalware programs, including using the same tools as part of their testing for new malicious software.

As you consider deploying antimalware tools, it is important to keep a few key decisions in mind. First, you need to determine what threats you are likely to face and where they are likely to be encountered. In many organizations, a majority of malicious software threats are encountered on individual workstations and laptops, or are sent and received via email.
Antimalware product deployments are thus focused on those two areas. Second, management, deployment, and monitoring for the tools is critical in an enterprise environment. Antimalware tools that allow central visibility and reporting can integrate with other security tools for an easy view of the state of your systems and devices. Third, the detection capabilities you deploy and the overall likelihood of your antimalware product detecting, stopping, and removing malicious software play a major role in decision processes. Since malware is a constantly evolving threat, many organizations choose to deploy more than one antimalware technology, hoping to increase the likelihood of detection.

Allow Lists and Deny Lists

One way to prevent malicious software from being used on systems is to control the applications that can be installed or used. That's where allow list and deny list (or block list) tools come in. Allow list tools let you build a list of software, applications, and other system components that are allowed to exist and run on a system. If they are not on the list, they will be removed or disabled, or they will not be able to be installed. Block lists, or deny lists, are lists of software or applications that cannot be installed or run, rather than a list of what is allowed.

The choice between the solutions depends on what administrators and security professionals want to accomplish. If a system requires extremely high levels of security, an allow list will provide greater security than a block or deny list, but if only specific programs are considered undesirable, a block list is less likely to interfere with unknown or new applications or programs. Although these tools may sound appealing, they do not see widespread deployment in most organizations because of the effort required to maintain the lists.
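The difference between the two approaches can be sketched in a few lines. The application names below are illustrative placeholders, not entries from any real allow-list or block-list product.

```python
# Allow list: only explicitly listed applications may run (default deny).
allow_list = {"winword.exe", "excel.exe", "msedge.exe"}

# Deny list: everything may run except explicitly listed applications
# (default allow).
deny_list = {"mimikatz.exe", "nc.exe"}

def allow_list_permits(app: str) -> bool:
    """Under an allow list, anything not explicitly listed is blocked."""
    return app.lower() in allow_list

def deny_list_permits(app: str) -> bool:
    """Under a deny list, anything not explicitly listed is permitted."""
    return app.lower() not in deny_list
```

A brand-new, unknown program is blocked by the allow list but permitted by the deny list, which is exactly the trade-off described above: stronger security versus less interference and lower maintenance effort.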
Limited deployments and specific uses are more common throughout organizations, and other versions of allow and block lists are implemented in the form of firewalls and similar protective technologies.

Exam Note

The Security+ exam uses the terms allow list and deny list or block list. You may also still encounter the term whitelist used to describe allow lists and blacklist used to describe deny or block lists, since these terms have been in broad use, and changing terminology across the industry will take some time.

Endpoint Detection and Response and Extended Detection and Response

When antimalware tools are not sufficient, endpoint detection and response (EDR) tools can be deployed. EDR tools combine monitoring capabilities on endpoint devices and systems, using a client or software agent, with network monitoring and log analysis capabilities to collect, correlate, and analyze events. Key features of EDR systems are the ability to search and explore the collected data and to use it for investigations, as well as the ability to detect suspicious data. With the continued growth of security analytics tools, EDR systems tend to look for anomalies and indicators of compromise (IoCs) using automated rules and detection engines as well as allowing manual investigation. The power of an EDR system comes in its ability to make this detection and reporting capability accessible and useful for organizations that are dealing with very large quantities of security data.

If you are considering an EDR deployment, you will want to pay attention to organizational needs like the ability to respond to incidents; the ability to handle threats from a variety of sources, ranging from malware to data breaches; and the ability to filter and review large quantities of endpoint data in the broader context of your organization. In addition to EDR, extended detection and response (XDR) tools are increasingly deployed by organizations.
XDR is similar to EDR but takes a broader perspective, considering not only endpoints but the full breadth of an organization's technology stack, including cloud services, security services and platforms, email, and similar components. XDR tools ingest logs and other information from this broad range of components, then use detection algorithms as well as artificial intelligence and machine learning to analyze the data to find issues and help security staff respond to them.

Data Loss Prevention

Protecting organizational data from both theft and inadvertent exposure drives the use of data loss prevention (DLP) tools like the tools and systems introduced in Chapter 1, "Today's Security Professional." DLP tools may be deployed to endpoints in the form of clients or applications. These tools also commonly have network and server-resident components to ensure that data is managed throughout its life cycle and various handling processes. Key elements of DLP systems include:

- The ability to classify data so that organizations know which data should be protected
- Data labeling or tagging functions, to support classification and management practices
- Policy management and enforcement functions used to manage data to the standards set by the organization
- Monitoring and reporting capabilities, to quickly notify administrators or security practitioners about issues or potential problems

Some DLP systems also provide additional functions that encrypt data when it is sent outside of protected zones. In addition, they may include capabilities intended to allow sharing of data without creating potential exposure, either tokenizing, wiping, or otherwise modifying data to permit its use without violating the data policies set in the DLP environment. Mapping your organization's data, and then applying appropriate controls based on a data classification system or policy, is critical to success with a DLP system.
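Pattern-based content inspection is one building block of the classification and policy enforcement functions described above. The sketch below uses two illustrative regular expressions; commercial DLP rules are far more extensive and typically validate matches (for example, with checksum tests on card numbers) to reduce false positives.

```python
import re

# Illustrative DLP content rules mapping a classification label to a pattern.
# These two patterns are simplified examples, not production-grade rules.
rules = {
    "us-ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit-card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify(content: str) -> list[str]:
    """Return the labels of all rules that match the given content."""
    return [label for label, pattern in rules.items() if pattern.search(content)]
```

A DLP agent applying rules like these could then enforce policy on the match, for example by blocking an outbound email or flagging a file share for review.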
Much like antimalware and EDR systems, some DLP systems also track user behaviors to identify questionable behavior or common mistakes like assigning overly broad permissions on a file or sharing a sensitive document via email or a cloud file service.

Of course, even with DLP in place there are likely to be ways around the system. Taking a picture of a screen, cutting and pasting, or printing sensitive data can all allow malicious or careless users to extract data from an environment even if effective and well-managed data loss prevention tools are in place. Thus, DLP systems are part of a layered security infrastructure that combines technical and administrative solutions such as data loss prevention with policies, awareness, and other controls based on the risks that the organization and its data face.

Exam Note

The Security+ exam outline leans into detection tools via EDR and its broad-based relative, XDR. EDR or XDR is increasingly found in most organizations due to the significant threats posed by ransomware and other malicious software that exceed the capabilities of traditional antivirus to counter. You'll also need to know about data loss prevention (DLP) as a tool to ensure that data doesn't leave the organization.

Network Defenses

Protecting endpoints from network attacks can be done with a host-based firewall that can stop unwanted traffic. Host-based firewalls are built into most modern operating systems and are typically enabled by default. Of course, host-based firewalls don't provide much insight into the traffic they are filtering since they often simply block or allow specific applications, services, ports, or protocols. More advanced filtering requires greater insight into what the traffic being analyzed is, and that's where a host intrusion prevention or intrusion detection system comes in. A host-based intrusion prevention system (HIPS) analyzes traffic before services or applications on the host process it.
A HIPS can take action on that traffic, including filtering out malicious traffic or blocking specific elements of the data that is received. A HIPS will look at traffic that is split across multiple packets or spread throughout an entire series of communications, allowing it to catch malicious activity that may be spread out or complex. Since a HIPS can actively block traffic, misidentification of traffic as malicious, misconfiguration, or other issues can cause legitimate traffic to be blocked, potentially causing an outage. If you choose to deploy a HIPS or any other tool that can actively block traffic, you need to consider what would happen if something did go wrong.

When the HIPS Blocks Legitimate Traffic

The authors of this book encountered exactly the situation described here after a HIPS tool was deployed to Windows systems in a datacenter. The HIPS had a module that analyzed Microsoft-specific protocols looking for potential attacks, and a Windows update introduced new flags and other elements to the protocol used by the Windows systems as part of their normal communication. Since the HIPS wasn't updated to know about those changes, it blocked the backend traffic between Windows systems in the domain. Unfortunately, that meant that almost the entire Windows domain went offline since the HIPS blocked much of the communication it relied on. The issue left the systems administrators with a negative feeling about the HIPS, and it was removed from service. It took years until the organization was comfortable deploying a HIPS-style security tool in the datacenter again.

A host-based intrusion detection system (HIDS) performs similar functions, but, like a network-based intrusion detection system (IDS), it cannot take action to block traffic. Instead, a HIDS can only report and alert on issues. Therefore, a HIDS has limited use for real-time security, but it has a much lower likelihood of causing issues.
Before deploying and managing host-based firewalls, HIPSs, or HIDSs, determine how you will manage them, how complex the individual configurations will be, and what would happen if the host-based protections had problems. That doesn't mean you shouldn't deploy them! Granular controls are an important part of a zero-trust design, and armoring hosts helps ensure that a compromised system behind a security boundary does not result in broader issues for organizations.

Figure 11.2 shows typical placement of a host-based firewall, a HIPS, and a HIDS, as well as where a network firewall or IPS/IDS device might be placed. Note that traffic can move from system to system behind the network security devices without those devices seeing it, due to the network switch that the traffic flows through. In organizational networks, these security boundaries may cover very large network segments, with hundreds or even thousands of systems that could potentially communicate without being filtered by a network security device.

FIGURE 11.2 Host firewalls and IPS systems vs. network firewalls and IPS systems

Hardening Techniques

Ensuring that a system has booted securely is just the first step in keeping it secure. Hardening endpoints and other systems relies on a variety of techniques that protect the system and the software that runs on it.

Hardening

Hardening a system or application involves changing settings on the system to increase its overall level of security and reduce its vulnerability to attack. The concept of a system's attack surface, or the places where it could be attacked, is important when performing system hardening.
Hardening tools and scripts are a common way to perform basic hardening processes on systems, and organizations like the Center for Internet Security (CIS), found at www.cisecurity.org/cis-benchmarks, and the National Institute of Standards and Technology, found at https://ncp.nist.gov/repository, provide hardening guides for operating systems, browsers, and a variety of other hardening targets.

Exam Note

The Security+ exam outline lists encryption, installing endpoint protection, host-based firewalls and host-based intrusion prevention systems, disabling ports and protocols, changing default passwords, and removing unnecessary software as key hardening items you should be able to explain.

Service Hardening

One of the fastest ways to decrease the attack surface of a system is to reduce the number of open ports and services that it provides by disabling ports and protocols. After all, if attackers cannot connect to the system remotely, they'll have a much harder time exploiting the system directly. Port scanners are commonly used to quickly assess which ports are open on systems on a network, allowing security practitioners to identify and prioritize hardening targets. The easy rule of thumb for hardening is that only services and ports that must be available to provide necessary services should be open, and that those ports and services should be limited to only the networks or systems that they need to interact with. Unfortunately for many servers, this may mean that the systems need to be open to the world. Table 11.1 lists some of the most common ports and services that are open for both Linux and Windows systems.
TABLE 11.1 Common ports and services

Port and protocol                          | Windows          | Linux
22/TCP—Secure Shell (SSH)                  | Uncommon         | Common
53/TCP and UDP—DNS                         | Common (servers) | Common (servers)
80/TCP—HTTP                                | Common (servers) | Common (servers)
137-139/TCP and UDP—NetBIOS                | Common           | Occasional
389/TCP and UDP—LDAP                       | Common (servers) | Common (servers)
443/TCP—HTTPS                              | Common (servers) | Common (servers)
3389/TCP and UDP—Remote Desktop Protocol   | Common           | Uncommon

We talked more about services and ports in Chapter 5, “Security Assessment and Testing.” Make sure you can identify common services by their ports and understand the basic concepts behind services, ports, and protocols. Although blocking a service using a firewall is a viable option, the best option for unneeded services is to disable them entirely. In Windows you can use the Services.msc console shown in Figure 11.3 to disable or enable services. Note that here, the Remote Desktop Services service is set to start manually, meaning that it will not be accessible unless the system’s user starts it.

FIGURE 11.3 Services.msc showing Remote Desktop Services set to manual

Starting and stopping services in Linux requires knowing how your Linux distribution handles services. For an Ubuntu Linux system, checking which services are running can be accomplished by using the service --status-all command. Starting and stopping services can be done in a number of ways, but the service command is an easy method. Issuing the sudo service [service name] stop or start command will start or stop a service, using the information provided by the service --status-all command to identify the service you want to shut down. Permanently stopping services, however, will require you to make other changes. For Ubuntu, the update-rc.d script is used, whereas Red Hat systems use chkconfig.

Exam Note

Fortunately, for the Security+ exam you shouldn't need to know OS-specific commands.
Instead, you should understand the concept of disabling services to reduce the attack surface of systems and that ongoing review and maintenance is required to ensure that new services and applications do not appear over time.

Network Hardening

A common technique used in hardening networks is the use of VLANs (virtual local area networks) to segment different trust levels, user groups, or systems. Placing IoT devices on a separate, protected VLAN with appropriate access controls or dedicated network security protections can help to ensure that frequently vulnerable devices are more protected. Using a VLAN for guest networks or to isolate VoIP phones from workstations is also a common practice.

Default Passwords

Changing default passwords is a core hardening practice and should be standard in any organization. Since default passwords, which you first encountered in Chapter 6, “Application Security,” are typically documented and publicly available, any use of default passwords creates significant risk for organizations. There are databases of default passwords easily available, including the version provided by CIRT.net: https://cirt.net/passwords. Vulnerability scanners will frequently note default passwords that they discover as part of their scanning process, but detecting them this way means something else has already gone wrong! Default passwords shouldn't be left on systems and devices.

Removing Unnecessary Software

Another key practice in hardening efforts for many systems and devices is removing unnecessary software. While disabling services, ports, and protocols can be helpful, removing software that isn't needed eliminates the potential for a disabled tool to be reenabled. It also reduces the amount of patching and monitoring that will be required for the system. Many systems arrive with unwanted and unneeded software preinstalled.
Organizations often build their own system images and reinstall a fresh operating system image without unwanted software installed to simplify the process while also allowing them to deploy the software that they do want and need for business purposes. Cell phones and other mobile devices suffer from the same issues, particularly with vendor-supplied tools. Mobile device management platforms can help with this, but personally owned devices remain challenging to address.

Operating System Hardening

Hardening operating systems relies on changing settings to match the desired security stance for a given system. Popular benchmarks and configuration standards can be used as a base and modified to an organization's needs, allowing tools like the Center for Internet Security (CIS) benchmarks to be used throughout an organization. Fortunately, tools and scripts exist that can help make applying those settings much easier. Examples of the type of configuration settings recommended by the CIS benchmarks for Windows include the following:

- Setting the password history to remember 24 or more passwords
- Setting the maximum password age to “365 or fewer days, but not 0,” preventing users from simply changing their passwords 24 times to get back to the same password while requiring password changes every year
- Setting the minimum password length to 14 or more characters
- Requiring password complexity
- Disabling the storage of passwords using reversible encryption

Figure 11.4 shows how these settings can be set locally using the Local Security Policy. In addition to local settings, these can also be set with Group Policy. This list is a single section in a document with over 1,300 pages of information about how to lock down Windows, which includes descriptions of every setting, the rationale for the recommendation, what the impact of the recommended setting is, and information on how to both set and audit the setting.
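Checks like the CIS password recommendations can be expressed as simple rules and audited automatically. The Python sketch below is illustrative only: the setting names are hypothetical rather than actual CIS benchmark identifiers, and a real audit would read values from the Local Security Policy or Group Policy instead of a dictionary.

```python
# Rules mirroring the password recommendations discussed above.
# Key names are hypothetical, not real benchmark identifiers.
CIS_PASSWORD_BASELINE = {
    "password_history": lambda v: v >= 24,
    "max_password_age_days": lambda v: 0 < v <= 365,
    "min_password_length": lambda v: v >= 14,
    "complexity_required": lambda v: v is True,
    "reversible_encryption": lambda v: v is False,
}

def audit_password_policy(settings: dict) -> list:
    """Return the names of settings that are missing or fail the baseline."""
    failures = []
    for name, rule in CIS_PASSWORD_BASELINE.items():
        if name not in settings or not rule(settings[name]):
            failures.append(name)
    return failures
```

A tool built on this pattern would report each failing setting along with the recommended value, much as the CIS benchmark documents do.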
The good news is that you don't need to know all the details of how to harden Windows, Linux, or macOS for the Security+ exam. What you do need to know is that operating system hardening uses system settings to reduce the attack surface for your operating system; that tools and standards exist to help with that process; and that assessing, auditing, and maintaining OS hardening for your organization is part of the overall security management process.

FIGURE 11.4 Windows Local Security Policy

Hardening the Windows Registry

The Windows Registry is the core of how Windows tracks what is going on. The Registry is thus an important target for attackers, who can use it to automatically start programs, gather information, or otherwise take malicious action on a target machine. Hardening the Windows Registry involves configuring permissions for the Registry, disallowing remote Registry access if it isn't required for a specific need, and limiting access to Registry tools like regedit so that attackers who do gain access to a system will be less likely to be able to change or view the Registry. If you want to experiment with Registry modifications or you're actually making these changes to production systems, you'll likely want to back up the Registry first in case something goes wrong. You can read more about that at https://support.microsoft.com/en-gb/topic/how-to-back-up-and-restore-the-registry-in-windows-855140ad-e318-2a13-2829-d428a2ab0692#ID0EBD=Windows_11.

Windows Group Policy and Hardening

Windows Group Policy provides Windows systems and domains with the ability to control settings through Group Policy Objects (GPOs). GPOs can define a wide range of options from disabling guest accounts and setting password minimum lengths to restricting software installations. GPOs can be applied locally or via Active Directory, allowing for large-scale managed deployments of GPOs.
Microsoft provides the Security Compliance Toolkit (SCT), which is a set of tools that work with Microsoft's security configuration baselines for Windows and other Microsoft applications. The toolkit can compare deployed GPOs to the baseline as well as allow editing and deployment through Active Directory or as a local policy. You can read more about the Security Compliance Toolkit at https://learn.microsoft.com/en-us/windows/security/threat-protection/windows-security-configuration-framework/security-compliance-toolkit-10. Figure 11.5 shows Policy Analyzer, part of the SCT, run against a workstation using the Microsoft baseline.

FIGURE 11.5 Policy Analyzer using Microsoft's baseline against a default Windows system

Hardening Linux: SELinux

The Security+ exam outline specifically calls out one Linux hardening option, SELinux, or Security-Enhanced Linux. SELinux is a Linux kernel-based security module that provides additional security capabilities and options on top of existing Linux distributions. Among those capabilities is mandatory access control (MAC) that can be enforced at the user, file, system service, and network layer as part of its support for least privilege-based security implementations. SELinux enforces user rights based on a username, role, and type or domain for each entity. Similarly, files and other resources like network ports are labeled using a name, role, and type defined by policies that describe permissions and requirements to use the resource. SELinux is available for multiple Linux distributions and has been implemented in Android, where it is frequently used in both mobile and embedded device applications. If you're investigating how to harden Linux environments, you'll also likely run into AppArmor, which also implements mandatory access controls for Linux. AppArmor is broadly supported by popular Linux distributions. You can read more about it at https://apparmor.net.
Configuration, Standards, and Schemas To harden systems in an enterprise environment, you'll need to manage the configuration of systems through your organization. In fact, configuration management tools are one of the most powerful options security professionals and system administrators have to ensure that the multitude of systems in their organizations have the right security settings and to help keep them safe. A third-party configuration management system like Jamf Pro for macOS, a vendor-supplied tool like Configuration Manager for Windows, or even open source configuration management tools like CFEngine help enforce standards, manage systems, and report on areas where systems do not match expected settings. Configuration management tools often start with baseline configurations for representative systems or operating system types throughout an organization. For example, you might choose to configure a baseline configuration for Windows 11 desktops, Windows 11 laptops, and macOS laptops. Those standards can then be modified for specific groups, teams, divisions, or even individual users as needed by placing them in groups that have those modified settings. Baseline configurations are an ideal starting place to build from to help reduce complexity and make configuration management and system hardening possible across multiple machines—even thousands of them. Once baselines are set, tools support configuration enforcement, a process that not only monitors for changes but makes changes to system configurations as needed to ensure that the configuration remains in its desired state. 
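Configuration enforcement can be thought of as two operations: detecting drift from the baseline, then resetting any drifted settings to their desired state. A minimal Python sketch follows, assuming a hypothetical baseline of simple key/value settings; real configuration management tools manage far richer policy objects than this.

```python
# Illustrative baseline for a hypothetical Windows 11 laptop group.
BASELINE = {
    "firewall_enabled": True,
    "rdp_enabled": False,
    "screen_lock_minutes": 10,
}

def detect_drift(current: dict, baseline: dict) -> dict:
    """Map each drifted setting to its (current, desired) pair."""
    return {key: (current.get(key), desired)
            for key, desired in baseline.items()
            if current.get(key) != desired}

def enforce(current: dict, baseline: dict) -> dict:
    """Configuration enforcement: reset drifted settings to the desired state."""
    corrected = dict(current)
    corrected.update(baseline)
    return corrected
```

Reporting the drift rather than silently correcting it is also common, since unexpected changes may indicate compromise rather than misconfiguration.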
The Security+ exam outline considers baselines through three phases of a baseline's life cycle:

- Establishing a baseline, which is most often done using an existing industry standard like the CIS benchmarks, with modifications and adjustments made to fit the organization's needs
- Deploying the security baseline using central management tools, or even manually, depending on the scope, scale, and capabilities of the organization
- Maintaining the baseline by using central management tools and enforcement capabilities, as well as making adjustments to the organization's baseline if required by functional needs or other changes

Patching and Patch Management

Ensuring that systems and software are up to date helps ensure endpoint security by removing known vulnerabilities. Timely patching decreases how long exploits and flaws can be used against systems, but patching also has its own set of risks. Patches may introduce new flaws, or the patching process itself can sometimes cause issues. The importance of patching, as well as the need to make sure that patching is controlled and managed, is where patch management comes in. A significant proportion of modern software, including browsers, office suites, and many other packages, has built-in automatic update tools. These tools check for an update and either notify the user to request permission to install the update or install it automatically. Although this model is useful for individual devices, an enterprise solution makes sense for organizations. Patch management for operating systems can be done using tools like Microsoft's Configuration Manager for Windows, but third-party tools provide support for patch management across a wide variety of software applications.
Tracking versions and patch status using a systems management tool can be important for organizational security, particularly because handling third-party updates for the large numbers of applications and packages that organizations are likely to have deployed can be a daunting task. A common practice for many organizations is to delay the installation of a patch for a period of a few days from its release date. That allows the patch to be installed on systems around the world, hopefully providing insight into any issues the patch may create. In cases where a known exploit exists, organizations have to choose between the risk of patching and the risk of not patching and having the exploit used against them. Testing can help, but many organizations don't have the resources to do extensive patch testing. Managing software for mobile devices remains a challenge, but mobile device management tools also include the ability to validate software versions. These tools can often update applications and apply patches and software updates to the device operating system, in addition to controlling security settings. As you consider endpoint security, think about how you will update your endpoint devices and the software and applications they run, and what information you would need to confirm that patching is consistent and up to date. Key features like reporting, the ability to determine which systems or applications get updates and when, the ability to block an update, and of course being able to force updates to be installed are all critical to enterprise patch management across the many different types of devices in our organizations today.

Encryption

Keeping the contents of disks secure protects data in the event that a system or disk is lost or stolen. That's where disk encryption comes in. Full-disk encryption (FDE) encrypts the disk and requires that the bootloader or a hardware device provide a decryption key and software or hardware to decrypt the drive for use.
One of the most common implementations of this type of encryption is transparent encryption (sometimes called on-the-fly, or real-time, encryption). Transparent encryption implementations are largely invisible to the user, with the drive appearing to be unencrypted during use. This also means that the simplest attack against a system that uses transparent FDE is to gain access to the system while the drive is unlocked. Volume encryption (sometimes called filesystem-level encryption) protects specific volumes of the drive, allowing different trust levels and additional security beyond that provided by encrypting the entire disk with a single key. File and folder encryption methods can also be used to protect specific data, again allowing for different trust levels as well as transfer of data in secure ways. Full-disk encryption can be implemented at the hardware level using a self-encrypting drive (SED). Self-encrypting drives implement encryption capabilities in their hardware and firmware. Systems equipped with a self-encrypting drive require a key to boot from the drive, which may be entered manually or provided by a hardware token or device. Since this is a form of full-disk encryption, the same sort of attack methods work— simply find a logged-in system or one that is in sleep mode. Disk encryption does bring potential downfalls. If the encryption key is lost, the data on the drive will likely be unrecoverable since the same strong encryption that protects it will make it very unlikely that you will be able to brute-force the key and acquire the data. Technical support can be more challenging, and data corruption or other issues can have a larger impact, resulting in unrecoverable data. Despite these potential downfalls, the significant advantage of full-disk encryption is that a lost or stolen system with a fully encrypted drive can often be handled as a loss of the system instead of a loss or breach of the data that system contained. 
Securing Embedded and Specialized Systems Security practitioners encounter traditional computers and servers every day, but as smart devices, embedded systems, and other specialized systems continue to be built into everything from appliances, to buildings, to vehicles, and even clothing, the attack surface for organizations is growing in new ways. Wherever these systems appear, they need to be considered as part of an organization's overall security posture. Embedded Systems Embedded systems are computer systems that are built into other devices. Industrial machinery, appliances, and cars are all places where you may have encountered embedded systems. Embedded systems are often highly specialized, running customized operating systems and with very specific functions and interfaces that they expose to users. In a growing number of cases, however, they may embed a relatively capable system with Wi-Fi, cellular, or other wireless access that runs Linux or a similar, more familiar operating system. Many embedded systems use a real-time operating system (RTOS). An RTOS is an operating system that is used when priority needs to be placed on processing data as it comes in, rather than using interrupts for the operating system or waiting for tasks being processed to be handled before data is processed. Since embedded systems are widely used for industrial processes where responses must be quick, real-time operating systems are used to minimize the amount of variance in how quickly the OS accepts data and handles tasks. Embedded systems come in many types and can be so fully embedded that you may not realize that there is a system embedded in the device you are looking at. As a security professional, you need to be able to assess embedded system security and identify ways to ensure that they remain secure and usable for their intended purpose without causing the system itself to malfunction or suffer from unacceptable degradations in performance. 
Assessing embedded systems can be approached much as you would a traditional computer system: 1. Identify the manufacturer or type of embedded system and acquire documentation or other materials about it. 2. Determine how the embedded system interfaces with the world: does it connect to a network, to other embedded devices, or does it only have a keyboard or other physical interface? 3. If the device does provide a network connection, identify any services or access to it provided through that network connection, and how you can secure those services or the connection itself. 4. Learn about how the device is updated, if patches are available, and how and when those patches should be installed; then ensure a patching cycle is in place that matches the device's threat model and usage requirements. 5. Document what your organization would do in the event that the device has a security issue or compromise. Could you return to normal? What would happen if the device were taken offline due to that issue? Are there critical health, safety, or operational issues that might occur if the device failed or needed to be removed from service? 6. Document your findings and ensure that appropriate practices are included in your organization's operational procedures. As more and more devices appear, this effort requires more and more time. Getting ahead of the process so that security is considered as part of the acquisitions process can help, but many devices may simply show up as part of other purchases, including things like the following: Medical systems, including devices found in hospitals and at doctors’ offices, may be network connected or have embedded systems. Medical devices like pacemakers, insulin pumps, and other external or implantable systems can also be attacked, with exploits for pacemakers via Bluetooth already existing in the wild. Smart meters are deployed to track utility usage and bring with them a wireless control network managed by the utility. 
Since the meters are now remotely accessible and controllable, they provide a new attack surface that could interfere with power, water, or other utilities, or that could provide information about the facility or building. Vehicles ranging from cars to aircraft and even ships at sea are now network connected, and frequently are directly Internet connected. If they are not properly secured, or if the backend servers and infrastructure that support them are vulnerable, attackers can take control, monitor, or otherwise seriously impact them. Your Car as an Internet-Connected Device Vehicles are increasingly networked, including cellular connections that allow them to stay connected to the manufacturer for emergency services and even updates. Vehicles also rely on controller area network (CAN) buses, which provide communication between microcontrollers, sensors, and other devices that make up a car's systems. As with any other network, cars can be attacked and potentially compromised. Stories like www.wired.com/story/car-hack-shut-down-safety-features demonstrate that attacks against vehicles are possible and that the repercussions of a compromised vehicle may be significant. Shutting down safety features, or potentially taking complete control of a self-driving car, are within the realm of possibility! Drones and autonomous vehicles (AVs), as well as similar vehicles, may be controlled from the Internet or through wireless command channels. Encrypting their command-and-control channels and ensuring that they have appropriate controls if they are Internet or network connected are critical to their security. VoIP systems include both backend servers as well as the VoIP phones and devices that are deployed to desks and work locations throughout an organization. The phones themselves are a form of embedded system, with an operating system that can be targeted and may be vulnerable to attack. 
Some phones also provide interfaces that allow direct remote login or management, making them vulnerable to attack from VoIP networks. Segmenting networks to protect potentially vulnerable VoIP devices, updating them regularly, and applying baseline security standards for the device help keep VoIP systems secure. Printers, including multifunction printers (MFPs), frequently have network connectivity built in. Wireless and wired network interfaces provide direct access to the printers, and many printers have poor security models. Printers have been used as access points to protected networks, to reflect and amplify attacks, and as a means of gathering information. In fact, MFPs, copiers, and other devices that scan and potentially store information from faxes, printouts, and copies make these devices a potentially significant data leakage risk in addition to the risk they can create as vulnerable networked devices that can act as reflectors and amplifiers in attacks, or as pivot points for attackers. Surveillance systems like camera systems and related devices that are used for security but that are also networked can provide attackers with a view of what is occurring inside a facility or organization. Cameras provide embedded interfaces that are commonly accessible via a web interface. Default configurations, vulnerabilities, lack of patching, and similar issues are common with specialized systems, much the same as with other embedded systems. When you assess specialized systems, consider both how to limit the impact of these potential problems and the management, administration, and incident response processes that you would need to deal with them for your organization. SCADA and ICS Industrial and manufacturing systems are often described using one of two terms. 
Industrial control systems (ICSs) is a broad term for industrial automation, and Supervisory Control and Data Acquisition (SCADA) often refers to large systems that run power and water distribution or other systems that cover large areas. Since the terms overlap, they are often used interchangeably. SCADA is a type of system architecture that combines data acquisition and control devices, computers, communications capabilities, and an interface to control and monitor the entire architecture. SCADA systems are commonly found running complex manufacturing and industrial processes, where the ability to monitor, adjust, and control the entire process is critical to success. Figure 11.6 shows a simplified overview of a SCADA system. You'll see that there are remote telemetry units (RTUs) that collect data from sensors and programmable logic controllers (PLCs) that control and collect data from industrial devices like machines or robots. Data is sent to the system control and monitoring controls, allowing operators to see what is going on and to manage the SCADA system. These capabilities mean that SCADA systems are in common use in industrial and manufacturing environments, as well as in the energy industry to monitor plants, and even in the logistics industry to track packages through complex sorting and handling systems. There are multiple meanings for the acronym RTU, including remote terminal unit, remote telemetry unit, and remote telecontrol unit. You won't need to know the differences for the exam, but you should be aware that your organization or vendor may use RTU to mean one or more of those things. Regardless of which term is in use, an RTU will use a microprocessor to control a device or to collect data from it to pass on to an ICS or SCADA system.
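The data flow in a SCADA architecture like the one in Figure 11.6 can be illustrated with a toy model: RTUs report sensor readings upstream, and the supervisory layer watches for out-of-range values. Everything below (class names, thresholds, readings) is hypothetical and far simpler than real industrial telemetry protocols.

```python
class RTU:
    """Toy stand-in for a remote telemetry unit reporting one sensor."""
    def __init__(self, name, read_sensor):
        self.name = name
        self.read_sensor = read_sensor  # callable standing in for real telemetry

    def poll(self):
        return {"unit": self.name, "value": self.read_sensor()}

class SupervisoryStation:
    """Toy supervisory layer that raises alarms for out-of-range readings."""
    def __init__(self, low, high):
        self.low, self.high = low, high

    def monitor(self, rtus):
        alarms = []
        for rtu in rtus:
            reading = rtu.poll()
            if not (self.low <= reading["value"] <= self.high):
                alarms.append(reading)
        return alarms
```

Even this simplified model shows why SCADA security matters: an attacker who can tamper with readings or the supervisory logic can hide or trigger alarms at will.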
ICS and SCADA can also be used to control and manage facilities, particularly when the facility requires management of things like heating, ventilation, and air-conditioning (HVAC) systems to ensure that the processes or systems are at the proper temperature and humidity. Since ICS and SCADA systems combine general-purpose computers running commodity operating systems with industrial devices with embedded systems and sensors, they present a complex security profile for security professionals to assess. In many cases, they must be addressed as individual components to identify their unique security needs, including things like customized industrial communication protocols and proprietary interfaces. Once those individual components are mapped and understood, their interactions and security models for the system as a whole or as major components can be designed and managed. FIGURE 11.6 A SCADA system showing PLCs and RTUs with sensors and equipment A key thing to remember when securing complex systems like this is that they are often designed without security in mind. That means that adding security may interfere with their function or that security devices may not be practical to add to the environment. In some cases, isolating and protecting ICS, SCADA, and embedded systems is one of the most effective security measures that you can adopt. Securing the Internet of Things The Internet of Things (IoT) is a broad term that describes network-connected devices that are used for automation, sensors, security, and similar tasks. IoT devices are typically a type of embedded system, but many leverage technologies like machine learning (ML), AI, cloud services, and similar capabilities to provide “smart” features. 
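Because IoT devices so often ship with well-known default credentials, one practical control is to sweep a device inventory against a default-credential list. A minimal sketch follows, assuming a hypothetical inventory format and a tiny illustrative credential list; real databases such as the CIRT.net list mentioned earlier in this chapter are far larger.

```python
# Known default credential pairs (illustrative entries only).
KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def flag_default_credentials(devices):
    """Return names of devices still using a known default username/password.

    Each device is a dict with hypothetical keys: name, username, password.
    """
    return [d["name"] for d in devices
            if (d["username"], d["password"]) in KNOWN_DEFAULTS]
```

Flagged devices should have their credentials changed immediately, and the check repeated periodically, since factory resets can silently restore defaults.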
IoT devices bring a number of security and privacy concerns, and security analysts must be aware of these common issues: Poor security practices, including weak default settings, lack of network security (firewalls), exposed or vulnerable services, lack of encryption for data transfer, weak authentication, use of embedded credentials, insecure data storage, and a wide range of other poor practices. Short support lifespans—IoT devices may not be patched or updated, leaving them potentially vulnerable for most of their deployed lifespan. Vendor data-handling practice issues, including licensing and data ownership concerns, as well as the potential to reveal data to both employees and partners of the vendor and to government and other agencies without the device owner being aware. Despite these security concerns, IoT devices like sensors, building and facility automation devices, wearables, and other smart devices continue to grow in popularity. Security professionals must account for both the IoT devices that their organization procures and that staff and visitors in their facilities may bring with them. When Fitness Trackers Reveal Too Much In 2018, the United States military banned the use of fitness tracker and cellphone location data reporting applications in war zones and sensitive facilities. Although the devices themselves weren't banned, applications and features that reported GPS data and exercise details were due to what was described as significant risk to the users of the devices. The issue behind the ban was the data itself. Fitness and GPS data revealed both the routes and times that the users moved through the facilities and bases and could be used to help map the facilities. This meant that publicly available data via social media–enabled fitness applications could result in the leakage of information that would be useful to adversaries and that could allow targeted and timed attacks. 
As sensors, wearable devices, and other IoT and embedded systems continue to become more common, this type of exposure will increasingly be a concern. Understanding the implications of the data they gather, who has access to it, and who owns it is critical to organizational security. You can read more of the story at http://apnews.com/d29c724e1d72460fbf7c2e999992d258. Exam Note As you prepare for the exam consider how you would secure IoT, embedded devices, SCADA and ICS, and similar devices. Make sure you're aware of the challenges of protecting low-power, specialized devices that may not receive patches or support and that can have very long lifespans. Communication Considerations Many embedded and specialized systems operate in environments where traditional wired and wireless networks aren't available. As a security professional, you may need to account for different types of connectivity that are used for embedded systems. Cellular connectivity, including both existing LTE and other fourth-generation technologies as well as newer 5G network connectivity, can provide high-bandwidth access to embedded systems in many locations where a Wi-Fi network wouldn't work. Since third-party cellular providers are responsible for connectivity, embedded systems that use cellular connectivity need to be secured so that the cellular network does not pose a threat to their operation. Ensuring that they do not expose vulnerable services or applications via their cellular connections is critical to their security. Building in protections to prevent network exploits from traversing internal security boundaries such as those between wireless connectivity and local control buses is also a needed design feature. Physically securing the subscriber identity module (SIM) built into cellular-enabled devices can be surprisingly important. 
Documented examples of SIMs being removed and repurposed, including running up significant bills for data use after they were acquired, appear regularly in the media. SIM cloning attacks can also allow attackers to present themselves as the embedded system, allowing them to both send and receive information as a trusted system. Embedded systems may also take advantage of radio frequency protocols specifically designed for them. Zigbee is one example of a network protocol that is designed for personal area networks like those found in houses for home automation. Protocols like Zigbee and Z-Wave provide low-power, peer-to-peer communications for devices that don't need the bandwidth and added features provided by Wi-Fi and Bluetooth. That means that they have limitations on range and how much data they can transfer, and that since they are designed for home automation and similar uses, they do not have strong security models. As a security practitioner, you should be aware that devices that communicate using protocols like Zigbee may be deployed as part of building monitoring or other uses and are unlikely to have enterprise management, monitoring, or security capabilities.

Security Constraints of Embedded Systems

Embedded systems have a number of constraints that security solutions need to take into account. Since embedded systems may have limited capabilities, differing deployment and management options, and extended life cycles, they require additional thought to secure. When you consider security for embedded systems, you should take the following into account: The overall computational power and capacity of embedded systems is usually much lower than a traditional PC or mobile device. Although this may vary, embedded systems may use a low-power processor, have less memory, and have very limited storage space. That means that the compute power needed for cryptographic processing may not exist, or it may have to be balanced with other needs for CPU cycles.
At the same time, limited memory and storage capacity mean that there may not be room to run additional security tools like a firewall, antimalware, or the other security tools you're used to including in a design.

Embedded systems may not connect to a network. They may have no network connectivity, or they may have it but, due to environmental, operational, or security concerns, it may not be enabled or used. In fact, since many embedded systems are deployed outside of traditional networks, or in areas where connectivity may be limited, even devices with a built-in wireless network capability may not have the effective range to reach a viable network. Thus, you may be unable to patch, monitor, or maintain the devices remotely, and embedded devices may need to be secured as independent units.

Without network connectivity, CPU and memory capacity, and other elements, authentication is also likely to be impossible. In fact, authenticating to an embedded system may not be desirable due to safety or usability factors. Many of the devices you will encounter that use embedded systems are built into industrial machinery, sensors and monitoring systems, or even household appliances. Without authentication, other security models need to be identified to ensure that changes to the embedded system are authorized.

Embedded systems may be very low cost, but many are effectively very high cost because they are components in larger industrial or specialized devices. Simply replacing a vulnerable device may therefore be impossible, requiring compensating controls or special design decisions to ensure that the devices remain secure and do not create issues for their home organization.

Because of all these limitations, embedded devices may rely on implied trust.
They presume that operators and those who interact with them do so because they are trusted, and that physical access to the device means the user is authorized to use it and potentially change settings, update the device, or otherwise modify it. The implied trust model that accompanies physical access makes embedded devices a potential vulnerability for organizations, and one that must be reviewed and designed for before they are deployed.

Asset Management

Whenever an asset like hardware, software, or data is acquired or created, organizations need to follow a complete asset management life cycle to ensure the security of the asset. Acquisition and procurement processes should involve security best practices and assessment to ensure that assets have appropriate security controls or features, that the companies providing them have appropriate controls and practices themselves, and that contracts and agreements support the acquiring organization's needs.

Once assets have been acquired, they need to be added to asset inventories and tracked through their lifespan. This typically includes identifying owners or managers for devices, systems, software, and data. It may also include classification efforts, particularly for data and for systems that contain data the organization considers more sensitive or valuable. Knowing that a system contains, processes, and handles sensitive data is critical during incident response as well as during normal operations. Inventories of systems, asset tagging, inventory checking, and related management practices are commonly used to track assets. Adding systems and devices to management tools, along with their asset inventory information, is also a common practice that helps organizations understand their assets, classification, and ownership.

Exam Note

The Security+ exam outline mentions "enumeration" as well as asset inventory.
Enumeration is typically associated with scanning to identify assets, and some organizations use port and vulnerability scans to help identify systems that aren't part of their inventory.

If organizations do not maintain asset inventories, it can be very difficult to know what the organization has, what may be missing or lost, and where things are. It can also create significant risks, as acquisition of assets may be uncontrolled and invisible to the organization, allowing risks to grow without visibility. In organizations that lack asset inventories and strong asset management practices, it isn't uncommon to discover that an incident happened and nobody knew about it, simply because nobody knew the asset was compromised, stolen, or disposed of with sensitive data intact.

Decommissioning

When systems and devices reach the end of their useful life cycle, organizations need to establish a decommissioning process. Decommissioning typically involves removing a device or system from service, removing it from inventory, and ensuring that no sensitive data remains on it. In many cases that means dealing with the storage media or drive the system relies on, but it may also mean dealing with the entire device or with removable media. Ensuring that a disk is securely wiped when it is no longer needed and is being retired, or when it will be reused, is an important part of the lifespan for drives.

Sanitizing drives or media involves one of two processes: wiping the data or destroying the media. Tapes and similar magnetic media have often been wiped using a degausser, which exposes the magnetic media to very strong electromagnetic fields, scrambling the patterns of bits written to the tape or drive. Degaussers are a relatively quick way to destroy the data on magnetic media. SSDs, optical media and drives, and flash drives, however, require different handling.

Wiping media overwrites or discards the data in a nonrecoverable way.
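At its simplest, the overwrite idea can be sketched at the file level. The short Python sketch below is a conceptual illustration only, not a sanitization tool: it overwrites a single file's bytes before deleting it, and as discussed in this chapter, overwriting cannot be trusted on SSDs and other wear-leveled flash media, where dedicated tools or the drive's built-in Secure Erase should be used instead. The function name `wipe_file` is our own illustrative choice, not a standard utility.

```python
import os

def wipe_file(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents in place, then delete it.

    Conceptual illustration only: on SSDs and other flash media,
    wear leveling means overwritten blocks can survive elsewhere on
    the device, so real sanitization should rely on the drive's
    built-in Secure Erase or on full-disk encryption with key
    destruction, as described in the text.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)   # overwrite every byte with zeros
            f.flush()
            os.fsync(f.fileno())      # push the overwrite through to the device
    os.remove(path)
```

Multi-pass wiping tools apply the same overwrite-everything logic to every addressable location on an entire drive rather than to one file.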
For hard drives and other magnetic media, this may be accomplished with a series of writes, typically of 1s or 0s, to every storage location (bit) on the drive. Tools like Darik's Boot and Nuke (DBAN) perform multiple passes over an entire disk to attempt to ensure that no data remains. Data remanence, or data that remains on a disk after the fact, is a significant concern, particularly with SSDs and other flash media that use wear-leveling algorithms to spread wear and tear on the drive over more space than the listed capacity. Wiping an SSD with a traditional drive wipe utility that writes to all accessible locations will miss sections of the disk where copies of data may remain due to the wear-leveling process. Fortunately, tools that can use the built-in Secure Erase command for drives like these can help ensure that remnant data is not an issue.

An even better solution in many cases is to use full-disk encryption for the full life of a drive and then simply discard the encryption key when the data is ready to be retired. Unless your organization faces advanced threats, this approach is likely to keep the data from being recoverable even by reasonably advanced malicious actors.

Since wiping drives can have problems, destroying drives is a popular option for organizations that want to eliminate the risk of data loss or exposure. Shredding, pulverizing, or incinerating drives so that no data could possibly be recovered is an option, and third-party vendors specialize in providing services like these with a documented trail for each asset (drive or system) that is destroyed. Although this causes a loss of some potentially recoverable value, the remaining value of older drives is much less than the cost of a single breach in the risk equations for organizations that make this decision.

An important step for many organizations is certification.
Certification processes are used to document that assets were decommissioned, including certification processes for vendors who destroy devices and media. Certificates of destruction are provided as proof that assets were properly disposed of.

Using Someone Else's Hand-Me-Downs

What happens when drives aren't wiped? One of the authors of this book encountered exactly that situation. A system belonging to a department head was retired from daily use and handed off to a faculty member in a department on a college campus. That faculty member used the system for some time, and when they were done with it the system was handed down again to a graduate student. Over time, multiple graduate students were issued the system, and they used it until it was compromised. At that point, the incident response team learned that all of the data from the department head, the faculty member, and multiple graduate students was still accessible on the system and that no data had ever been removed during its lifespan. Sensitive documents, personal email, and a variety of other information were all accessible on the system! Fortunately, although the incident response team found that the system had malware, the malware was not a type that would have leaked the data. If the system had been wiped and reinstalled at any time during its many years of service, this potentially significant incident could have been avoided entirely. The department now has strong rules against hand-me-down systems and wipes and reinstalls every system before it is handed to another user.

Retention

While decommissioning and disposal are important, organizations often have to retain data or systems as well. Retention may be required for legal purposes, with set retention periods determined by law, or retention may be associated with a legal case due to a legal hold. Retention may also serve business purposes or have a compliance or audit component.
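The retention decisions described above reduce to a simple eligibility check: an asset can be disposed of only when its retention period has elapsed and no legal hold applies. The sketch below illustrates that logic in Python; the function and parameter names are hypothetical choices for this example, not part of any standard or tool.

```python
from datetime import date, timedelta

def eligible_for_disposal(acquired: date, retention_days: int,
                          legal_hold: bool, today: date) -> bool:
    """Return True only when an asset is past its retention period
    and is not subject to a legal hold.

    Illustrative sketch: real retention schedules are set by law,
    contract, or policy, and categorization determines which
    schedule applies to a given asset.
    """
    if legal_hold:
        return False  # a legal hold always overrides the normal schedule
    return today >= acquired + timedelta(days=retention_days)
```

A disposal workflow would run a check like this against categorized inventory records before any asset is wiped or destroyed.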
All of this means that disposal processes need to be linked to, and aware of, retention policies, procedures, and categorization to ensure that data, media, and devices subject to retention requirements are kept, and that assets past their retention period are properly disposed of. Retaining assets longer than the organization intends or is required to can also create risks, ranging from data breaches to more data being available during legal cases than would otherwise have been kept.

Summary

Endpoints are the most common devices in most organizations, and they are also the most varied. Therefore, you must pay particular attention to how they are secured from the moment they boot up. You can do this by using secure boot techniques built into the firmware of the systems themselves to preserve boot integrity, while leveraging TPMs, hardware security modules, key management systems, and secure enclaves to properly handle secrets. Understanding operating system vulnerabilities and their impact on endpoint security is critical for security professionals. You also need to be aware of and understand the impact of hardware life cycles, including firmware updates and end-of-life and legacy hardware considerations.

As a security professional, you need to know about the many common types of endpoint protection options that are available, as well as when and how to use them. Antivirus/antimalware software helps prevent malware infection. Host-based firewalls and IPSs, disabling ports and protocols, changing default passwords and other settings, and using security baselines are all common techniques used as part of endpoint protection. Tools like Group Policy and SELinux help enforce and improve security baselines and capabilities. EDR and XDR tools add another layer to the set of security protections for endpoint devices by combining detection and monitoring tools with central reporting and incident response capabilities.
Finally, data loss prevention (DLP) tools are deployed to ensure that data isn't inadvertently or intentionally exposed. Drive encryption keeps data secure if drives are stolen or lost. At the end of their life cycle, when devices are retired or fail, or when media needs to be reused, sanitization procedures are employed to ensure that remnant data doesn't leak. Wiping drives and physical destruction are both common options. It is critical to choose a sanitization method that matches the media and the security level required by the data stored on the media or drive.

Another important element in endpoint security is how organizations secure specialized and embedded systems. Internet of Things, SCADA, ICS, and other devices using real-time operating systems are everywhere throughout organizations in medical systems,