Chapter 05 - Security Assessment and Testing.pdf

Full Transcript


Chapter 5 Security Assessment and Testing THE COMPTIA SECURITY+ EXAM OBJECTIVES COVERED IN THIS CHAPTER INCLUDE: Domain 4.0: Security Operations 4.3. Explain various activities associated with vulnerability management. Identification methods (Vulnerability scan, Penetration testing, Responsible disclosure program, Bug bounty program, System/process audit) Analysis (Confirmation, Prioritize, Common Vulnerability Scoring System (CVSS), Common Vulnerabilities and Exposures (CVE), Vulnerability classification, Exposure factor, Environmental variables, Industry/organizational impact, Risk tolerance) Vulnerability response and remediation (Patching, Insurance, Segmentation, Compensating controls, Exceptions and exemptions) Validation of remediation (Rescanning, Audit, Verification) Reporting 4.4. Explain security alerting and monitoring concepts and tools. Tools (Security Content Automation Protocol (SCAP), Vulnerability scanners) 4.8. Explain appropriate incident response activities. Threat hunting Domain 5.0: Security Program Management and Oversight 5.3. Explain processes associated with third-party risk assessment and management. Rules of engagement 5.5. Explain types and purposes of audits and assessments. Attestation Internal (Compliance, Audit committee, Self-assessments) External (Regulatory, Examinations, Assessment, Independent third-party audit) Penetration testing (Physical, Offensive, Defensive, Integrated, Known environment, Partially known environment, Unknown environment, Reconnaissance, Passive, Active) Many security threats exist in today's cybersecurity landscape. In previous chapters, you've read about the threats posed by hackers with varying motivations, malicious code, and social engineering. Cybersecurity professionals are responsible for building, operating, and maintaining security controls that protect against these threats. An important component of this maintenance is performing regular security assessment and testing to ensure that controls are operating properly and that the environment contains no exploitable vulnerabilities. This chapter begins with a discussion of vulnerability management, including the design, scheduling, and interpretation of vulnerability scans. It then moves on to discuss penetration testing, an assessment tool that puts cybersecurity professionals in the role of attackers to test security controls. The chapter concludes with a discussion of cybersecurity exercises that may be used as part of an ongoing training and assessment program. Vulnerability Management Our technical environments are complex. We operate servers, endpoint systems, network devices, and many other components that each runs millions of lines of code and processes complex configurations. No matter how much we work to secure these systems, it is inevitable that they will contain vulnerabilities and that new vulnerabilities will arise on a regular basis. Vulnerability management programs play a crucial role in identifying, prioritizing, and remediating vulnerabilities in our environments. They use vulnerability scanning to detect new vulnerabilities as they arise and then implement a remediation workflow that addresses the highest-priority vulnerabilities. Every organization should incorporate vulnerability management into their cybersecurity program. 
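To make the remediation workflow concrete, the short Python sketch below ranks a handful of hypothetical findings by weighting each one's CVSS base score by a simple asset-criticality factor. The data, field names, and weighting scheme are illustrative assumptions rather than part of any particular scanner; real prioritization also factors in exposure, exploitability, compensating controls, and the organization's risk tolerance.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    title: str
    cvss: float              # CVSS base score reported by the scanner (0.0-10.0)
    asset_criticality: int   # 1 = low-value asset, 3 = business-critical (illustrative scale)

# Hypothetical scan output; in practice this would come from the scanner's report or API.
findings = [
    Finding("kiosk07", "Missing operating system patch", 9.8, 1),
    Finding("web02", "SQL injection in login form", 9.1, 3),
    Finding("db01", "Outdated SSL/TLS configuration", 7.5, 3),
]

# Rank remediation work by weighting severity with how important the affected asset is.
for f in sorted(findings, key=lambda f: f.cvss * f.asset_criticality, reverse=True):
    print(f"{f.cvss * f.asset_criticality:5.1f}  {f.host:<8}  {f.title}")
```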
Identifying Scan Targets Once an organization decides that it wishes to conduct vulnerability scanning and determines which, if any, regulatory requirements apply to their scans, they move on to the more detailed phases of the planning process. The next step is to identify the systems that will be covered by the vulnerability scans. Some organizations choose to cover all systems in their scanning process, whereas others scan systems differently (or not at all) depending on the answers to many different questions, including: What is the data classification of the information stored, processed, or transmitted by the system? Is the system exposed to the Internet or other public or semipublic networks? What services are offered by the system? Is the system a production, test, or development system? Organizations also use automated techniques to identify the systems that may be covered by a scan. Cybersecurity professionals use scanning tools to search the network for connected systems, whether they were previously known or unknown, and to build an asset inventory. Figure 5.1 shows an example of an asset map developed using the Qualys vulnerability scanner's asset inventory functionality. FIGURE 5.1 Qualys asset map Administrators may then supplement this inventory with additional information about the type of system and the information it handles. This information then helps make determinations about which systems are critical and which are noncritical. Asset inventory and asset criticality information helps guide decisions about the types of scans that are performed, the frequency of those scans, and the priority administrators should place on remediating vulnerabilities detected by the scan. Determining Scan Frequency Cybersecurity professionals depend on automation to help them perform their duties in an efficient, effective manner. Vulnerability scanning tools allow the automated scheduling of scans to take the burden off administrators. Figure 5.2 shows an example of how these scans might be configured in Tenable's Nessus product. Nessus was one of the first vulnerability scanners on the market and remains widely used today. Administrators may designate a schedule that meets their security, compliance, and business requirements. FIGURE 5.2 Configuring a Nessus scan Administrators should configure these scans to provide automated alerting when they detect new vulnerabilities. Many security teams configure their scans to produce automated email reports of scan results, such as the report shown in Figure 5.3. Many different factors influence how often an organization decides to conduct vulnerability scans against its systems: The organization's risk appetite is its willingness to tolerate risk within the environment. If an organization is extremely risk averse, it may choose to conduct scans more frequently to minimize the amount of time between when a vulnerability comes into existence and when it is detected by a scan. FIGURE 5.3 Sample Nessus scan report Regulatory requirements, such as those imposed by the Payment Card Industry Data Security Standard (PCI DSS) or the Federal Information Security Modernization Act (FISMA), may dictate a minimum frequency for vulnerability scans. These requirements may also come from corporate policies. Technical constraints may limit the frequency of scanning. 
For example, the scanning system may only be capable of performing a certain number of scans per day, and organizations may need to adjust scan frequency to ensure that all scans complete successfully. Business constraints may limit the organization from conducting resource-intensive vulnerability scans during periods of high business activity to avoid disruption of critical processes. Licensing limitations may curtail the bandwidth consumed by the scanner or the number of scans that may be conducted simultaneously. Cybersecurity professionals must balance each of these considerations when planning a vulnerability scanning program. It is usually wise to begin small and slowly expand the scope and frequency of vulnerability scans over time to avoid overwhelming the scanning infrastructure or enterprise systems. Configuring Vulnerability Scans Vulnerability management solutions provide administrators with the ability to configure many different parameters related to scans. In addition to scheduling automated scans and producing reports, administrators may customize the types of checks performed by the scanner, provide credentials to access target servers, install scanning agents on target servers, and conduct scans from a variety of network perspectives. It is important to conduct regular configuration reviews of vulnerability scanners to ensure that scan settings match current requirements. Scan Sensitivity Levels Cybersecurity professionals configuring vulnerability scans should pay careful attention to the configuration settings related to the scan sensitivity level. These settings determine the types of checks that the scanner will perform and should be customized to ensure that the scan meets its objectives while minimizing the possibility of disrupting the target environment. Typically, administrators create a new scan by beginning with a template. This may be a template provided by the vulnerability management vendor and built into the product, such as the Nessus scan templates shown in Figure 5.4, or it may be a custom-developed template created for use within the organization. As administrators create their own scan configurations, they should consider saving common configuration settings in templates to allow efficient reuse of their work, saving time and reducing errors when configuring future scans. FIGURE 5.4 Nessus scan templates Administrators may also improve the efficiency of their scans by configuring the specific plug-ins that will run during each scan. Each plug-in performs a check for a specific vulnerability, and these plug-ins are often grouped into families based on the operating system, application, or device that they involve. Disabling unnecessary plug-ins improves the speed of the scan by bypassing unnecessary checks and also may reduce the number of false positive results detected by the scanner. For example, an organization that does not use the Amazon Linux operating system may choose to disable all checks related to Amazon Linux in their scanning template. Figure 5.5 shows an example of disabling these plug-ins in Nessus. FIGURE 5.5 Disabling unused plug-ins Some plug-ins perform tests that may actually disrupt activity on a production system or, in the worst case, damage content on those systems. These intrusive plug-ins are a tricky situation. Administrators want to run these scans because they may identify problems that could be exploited by a malicious source. 
At the same time, cybersecurity professionals clearly don't want to cause problems on the organization's network and, as a result, may limit their scans to nonintrusive plug-ins. One way around this problem is to maintain a test environment containing copies of the same systems running on the production network and running scans against those test systems first. If the scans detect problems in the test environment, administrators may correct the underlying causes on both test and production networks before running scans on the production network.
Supplementing Network Scans
Basic vulnerability scans run over a network, probing a system from a distance. This provides a realistic view of the system's security by simulating what an attacker might see from another network vantage point. However, the firewalls, intrusion prevention systems, and other security controls that exist on the path between the scanner and the target server may affect the scan results, providing an inaccurate view of the server's security independent of those controls. Additionally, many security vulnerabilities are difficult to confirm using only a remote scan. Vulnerability scans that run over the network may detect the possibility that a vulnerability exists but be unable to confirm it with confidence, causing a false positive result that requires time-consuming administrator investigation.
Modern vulnerability management solutions can supplement these remote scans with trusted information about server configurations. This information may be gathered in two ways. First, administrators can provide the scanner with credentials that allow the scanner to connect to the target server and retrieve configuration information. This information can then be used to determine whether a vulnerability exists, improving the scan's accuracy over noncredentialed alternatives. For example, if a vulnerability scan detects a potential issue that can be corrected by an operating system update, the credentialed scan can check whether the update is installed on the system before reporting a vulnerability. Figure 5.6 shows an example of the credentialed scanning options available within Qualys. Credentialed scans may access operating systems, databases, and applications, among other sources.
FIGURE 5.6 Configuring credentialed scanning
Exam Note
Credentialed scans typically only retrieve information from target servers and do not make changes to the server itself. Therefore, administrators should enforce the principle of least privilege by providing the scanner with a read-only account on the server. This reduces the likelihood of a security incident related to the scanner's credentialed access. As you prepare for the Security+ exam, be certain that you understand the differences between credentialed and noncredentialed scanning!
In addition to credentialed scanning, some scanners supplement the traditional server-based scanning approach to vulnerability scanning with a complementary agent-based scanning approach. In this approach, administrators install small software agents on each target server. These agents conduct scans of the server configuration, providing an “inside-out” vulnerability scan, and then report information back to the vulnerability management platform for analysis and reporting. System administrators are typically wary of installing agents on the servers that they manage for fear that the agent will cause performance or stability issues.
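Under the hood, a credentialed scan and a local agent perform much the same check: ask the system's own package database what is installed and compare it against the version known to fix a flaw. The sketch below illustrates the credentialed variant using the third-party Paramiko SSH library with a read-only account; the host, credentials, package name, and fixed version are illustrative assumptions, and the command assumes a Debian-based target where dpkg-query is available.

```python
import paramiko

TARGET = "192.0.2.10"          # placeholder address of a system you are authorized to scan
PACKAGE = "openssl"            # illustrative package to check
FIXED_VERSION = "3.0.2-0ubuntu1.15"  # illustrative version that remediates the finding

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # production scanners should verify host keys
client.connect(TARGET, username="scanner", key_filename="/etc/scanner/id_ed25519", timeout=10)

# Ask the package manager (Debian/Ubuntu syntax) which version is actually installed.
_, stdout, _ = client.exec_command(f"dpkg-query -W -f='${{Version}}' {PACKAGE}")
installed = stdout.read().decode().strip()
client.close()

# Real scanners use proper version-comparison logic; a plain string test is enough for a sketch.
if installed and installed != FIXED_VERSION:
    print(f"{TARGET}: {PACKAGE} {installed} installed; fixed version is {FIXED_VERSION}")
else:
    print(f"{TARGET}: {PACKAGE} appears up to date ({installed or 'not installed'})")
```

An agent-based scanner performs the equivalent check locally on each server and reports the result back to the management platform.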
If you choose to use an agent-based approach to scanning, you should approach this concept conservatively, beginning with a small pilot deployment that builds confidence in the agent before proceeding with a more widespread deployment. Scan Perspective Comprehensive vulnerability management programs provide the ability to conduct scans from a variety of scan perspectives. Each scan perspective conducts the scan from a different location on the network, providing a different view into vulnerabilities. For example, an external scan is run from the Internet, giving administrators a view of what an attacker located outside the organization would see as potential vulnerabilities. Internal scans might run from a scanner on the general corporate network, providing the view that a malicious insider might encounter. Finally, scanners located inside the datacenter and agents located on the servers offer the most accurate view of the real state of the server by showing vulnerabilities that might be blocked by other security controls on the network. Controls that might affect scan results include the following: Firewall settings Network segmentation Intrusion detection systems (IDSs) Intrusion prevention systems (IPSs) The internal and external scans required by PCI DSS are a good example of scans performed from different perspectives. The organization may conduct its own internal scans but must supplement them with external scans conducted by an Approved Scanning Vendor (ASV). Vulnerability management platforms have the ability to manage different scanners and provide a consolidated view of scan results, compiling data from different sources. Figure 5.7 shows an example of how the administrator may select the scanner for a newly configured scan using Qualys. FIGURE 5.7 Choosing a scan appliance Scanner Maintenance As with any technology product, vulnerability management solutions require maintenance. Administrators should conduct regular maintenance of their vulnerability scanner to ensure that the scanning software and vulnerability feeds remain up-to-date. Scanning systems do provide automatic updating capabilities that keep the scanner and its vulnerability feeds up-to-date. Organizations can and should take advantage of these features, but it is always a good idea to check in once in a while and manually verify that the scanner is updating properly. Scanner Software Scanning systems themselves aren't immune from vulnerabilities. As shown in Figure 5.8, even vulnerability scanners can have security issues! Regular patching of scanner software protects an organization against scanner-specific vulnerabilities and also provides important bug fixes and feature enhancements to improve scan quality. FIGURE 5.8 Nessus vulnerability in the NIST National Vulnerability Database Source: National Institute of Standards and Technology Vulnerability Plug-in Feeds Security researchers discover new vulnerabilities every week, and vulnerability scanners can only be effective against these vulnerabilities if they receive frequent updates to their plug-ins. Administrators should configure their scanners to retrieve new plug-ins on a regular basis, preferably daily. Fortunately, as shown in Figure 5.9, this process is easily automated. 
FIGURE 5.9 Nessus Automatic Updates
Security Content Automation Protocol (SCAP)
The Security Content Automation Protocol (SCAP) is an effort by the security community, led by the National Institute of Standards and Technology (NIST), to create a standardized approach for communicating security-related information. This standardization is important to the automation of interactions between security components. The SCAP standards include the following:
Common Configuration Enumeration (CCE): Provides a standard nomenclature for discussing system configuration issues
Common Platform Enumeration (CPE): Provides a standard nomenclature for describing product names and versions
Common Vulnerabilities and Exposures (CVE): Provides a standard nomenclature for describing security-related software flaws
Common Vulnerability Scoring System (CVSS): Provides a standardized approach for measuring and describing the severity of security-related software flaws
Extensible Configuration Checklist Description Format (XCCDF): A language for specifying checklists and reporting checklist results
Open Vulnerability and Assessment Language (OVAL): A language for specifying low-level testing procedures used by checklists
For more information on SCAP, see NIST SP 800-126 Rev 3: The Technical Specification for the Security Content Automation Protocol (SCAP): SCAP Version 1.3 (http://csrc.nist.gov/publications/detail/sp/800-126/rev-3/final) or the SCAP website (csrc.nist.gov/projects/security-content-automation-protocol).
Exam Note
At the time this book went to press, the CompTIA exam objectives referred to CVE as Common Vulnerability Enumeration. This is an older definition of the acronym CVE and most industry professionals use the term Common Vulnerabilities and Exposures, so that is what we use throughout this book. The difference in terminology isn't really significant as long as you remember the purpose of CVE is to provide a standard naming system for flaws.
Vulnerability Scanning Tools
As you develop your cybersecurity toolkit, you will want to have a network vulnerability scanner, an application scanner, and a web application scanner available for use. Vulnerability scanners are often leveraged for preventive scanning and testing and are also found in penetration testers' toolkits, where they help identify systems that testers can exploit. This fact also means they're a favorite tool of attackers!
Infrastructure Vulnerability Scanning
Network vulnerability scanners are capable of probing a wide range of network-connected devices for known vulnerabilities. They reach out to any systems connected to the network, attempt to determine the type of device and its configuration, and then launch targeted tests designed to detect the presence of any known vulnerabilities on those devices. The following tools are examples of network vulnerability scanners:
Tenable's Nessus is a well-known and widely respected network vulnerability scanning product that was one of the earliest products in this field.
Qualys vulnerability scanner is a more recently developed commercial network vulnerability scanner that offers a unique deployment model using a software-as-a-service (SaaS) management console to run scans using appliances located both in on-premises datacenters and in the cloud.
Rapid7's Nexpose is another commercial vulnerability management system that offers capabilities similar to those of Nessus and Qualys.
The open source OpenVAS offers a free alternative to commercial vulnerability scanners.
These are four of the most commonly used network vulnerability scanners. Many other products are on the market today, and every mature organization should have at least one scanner in its toolkit. Many organizations choose to deploy two different vulnerability scanning products in the same environment as a defense-in-depth control. Application Testing Application testing tools are commonly used as part of the software development process. These tools analyze custom-developed software to identify common security vulnerabilities. Application testing occurs using three techniques: Static testing analyzes code without executing it. This approach points developers directly at vulnerabilities and often provides specific remediation suggestions. Dynamic testing executes code as part of the test, running all the interfaces that the code exposes to the user with a variety of inputs, searching for vulnerabilities. Interactive testing combines static and dynamic testing, analyzing the source code while testers interact with the application through exposed interfaces. Application testing should be an integral part of the software development process. Many organizations introduce testing requirements into the software release process, requiring clean tests before releasing code into production. Web Application Scanning Web application vulnerability scanners are specialized tools used to examine the security of web applications. These tools test for web-specific vulnerabilities, such as SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF) vulnerabilities. They work by combining traditional network scans of web servers with detailed probing of web applications using such techniques as sending known malicious input sequences and fuzzing in attempts to break the application. Nikto is a popular web application scanning tool. It is an open source tool that is freely available for anyone to use. As shown in Figure 5.10, it uses a command-line interface and is somewhat difficult to use. FIGURE 5.10 Nikto web application scanner Another open source tool available for web application scanning is Arachni. This tool, shown in Figure 5.11, is a packaged scanner available for Windows, macOS, and Linux operating systems. Most organizations do use web application scanners, but they choose to use commercial products that offer advanced capabilities and user-friendly interfaces. Although there are dedicated web application scanners, such as Acunetix, on the market, many firms use the web application scanning capabilities of traditional network vulnerability scanners, such as Nessus, Qualys, and Nexpose. FIGURE 5.11 Arachni web application scanner Reviewing and Interpreting Scan Reports Vulnerability scan reports provide analysts with a significant amount of information that assists with the interpretation of the report. These reports provide detailed information about each vulnerability that they identify. Figure 5.12 shows an example of a single vulnerability reported by the Nessus vulnerability scanner. Let's take a look at this report, section by section, beginning in the top left and proceeding in a counterclockwise fashion. At the very top of the report, we see two critical details: the name of the vulnerability, which offers a descriptive title, and the overall severity of the vulnerability, expressed as a general category, such as low, medium, high, or critical. 
In this example report, the scanner is reporting that a server is running an outdated and insecure version of the SSL protocol. It is assigned to the high severity category. Next, the report provides a detailed description of the vulnerability. In this case, the report provides a detailed description of the flaws in the SSL protocol and explains that SSL is no longer considered acceptable for use. The next section of the report provides a solution to the vulnerability. When possible, the scanner offers detailed information about how system administrators, security professionals, network engineers, and/or application developers may correct the vulnerability. In this case, the reader is instructed to disable SSL 2.0 and 3.0 and replace their use with a secure version of the TLS protocol. Due to security vulnerabilities in early versions of TLS, you should use TLS version 1.2 or higher. FIGURE 5.12 Nessus vulnerability scan report In the section of the report titled “See Also,” the scanner provides references where administrators can find more details on the vulnerability described in the report. In this case, the scanner refers the reader to several blog posts, Nessus documentation pages, and Internet Engineering Task Force (IETF) documents that provide more details on the vulnerability. The output section of the report shows the detailed information returned by the remote system when probed for the vulnerability. This information can be extremely valuable to an analyst because it often provides the verbatim output returned by a command. Analysts can use this to better understand why the scanner is reporting a vulnerability, identify the location of a vulnerability, and potentially identify false positive reports. In this case, the output section shows the specific insecure ciphers being used. The port/hosts section provides details on the server(s) that contain the vulnerability as well as the specific services on that server that have the vulnerability. In this case, the server's IP address is obscured for privacy reasons, but we can see that the server is running insecure versions of SSL on both ports 443 and 4433. The vulnerability information section provides some miscellaneous information about the vulnerability. In this case, we see that the SSL vulnerability has appeared in news reports. The risk information section includes useful information for assessing the severity of the vulnerability. In this case, the scanner reports that the vulnerability has an overall risk factor of High (consistent with the tag next to the vulnerability title). It also provides details on how the vulnerability rates when using the Common Vulnerability Scoring System (CVSS). You'll notice that there are two different CVSS scores and vectors. We will use the CVSS version 3 information, as it is the more recent rating scale. In this case, the vulnerability has a CVSS base score of 7.5 and has the CVSS vector CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N We'll discuss the details of CVSS scoring in the next section of this chapter. The final section of the vulnerability report provides details on the vulnerability scanner plug-in that detected the issue. This vulnerability was reported by Nessus plug-in ID 20007, which was published in October 2005 and updated in March 2019. Understanding CVSS The Common Vulnerability Scoring System (CVSS) is an industry standard for assessing the severity of security vulnerabilities. It provides a technique for scoring each vulnerability on a variety of measures. 
Cybersecurity analysts often use CVSS ratings to prioritize response actions.
Exam Note
Remember that CVSS is a component of SCAP. It is a publicly available framework that provides a score from 0 to 10 indicating the severity of a vulnerability. Also know that security analysts often refer to the Common Vulnerabilities and Exposures (CVE), which is a list of publicly known vulnerabilities that contain an ID number, description, and reference.
Analysts scoring a new vulnerability begin by rating the vulnerability on eight different measures. Each measure is given both a descriptive rating and a numeric score. The first four measures evaluate the exploitability of the vulnerability, whereas the last three evaluate the impact of the vulnerability. The eighth metric discusses the scope of the vulnerability.
Attack Vector Metric
The attack vector metric (AV) describes how an attacker would exploit the vulnerability and is assigned according to the criteria shown in Table 5.1.
TABLE 5.1 CVSS attack vector metric
Physical (P): The attacker must physically touch the vulnerable device. Score: 0.20
Local (L): The attacker must have physical or logical access to the affected system. Score: 0.55
Adjacent (A): The attacker must have access to the local network that the affected system is connected to. Score: 0.62
Network (N): The attacker can exploit the vulnerability remotely over a network. Score: 0.85
Attack Complexity Metric
The attack complexity metric (AC) describes the difficulty of exploiting the vulnerability and is assigned according to the criteria shown in Table 5.2.
TABLE 5.2 CVSS attack complexity metric
High (H): Exploiting the vulnerability requires “specialized” conditions that would be difficult to find. Score: 0.44
Low (L): Exploiting the vulnerability does not require any specialized conditions. Score: 0.77
Privileges Required Metric
The privileges required metric (PR) describes the type of account access that an attacker would need to exploit a vulnerability and is assigned according to the criteria in Table 5.3.
TABLE 5.3 CVSS privileges required metric
High (H): Attackers require administrative privileges to conduct the attack. Score: 0.27 (or 0.50 if Scope is Changed)
Low (L): Attackers require basic user privileges to conduct the attack. Score: 0.62 (or 0.68 if Scope is Changed)
None (N): Attackers do not need to authenticate to exploit the vulnerability. Score: 0.85
User Interaction Metric
The user interaction metric (UI) describes whether the attacker needs to involve another human in the attack. The user interaction metric is assigned according to the criteria in Table 5.4.
TABLE 5.4 CVSS user interaction metric
None (N): Successful exploitation does not require action by any user other than the attacker. Score: 0.85
Required (R): Successful exploitation does require action by a user other than the attacker. Score: 0.62
Confidentiality Metric
The confidentiality metric (C) describes the type of information disclosure that might occur if an attacker successfully exploits the vulnerability. The confidentiality metric is assigned according to the criteria in Table 5.5.
TABLE 5.5 CVSS confidentiality metric
None (N): There is no confidentiality impact. Score: 0.00
Low (L): Access to some information is possible, but the attacker does not have control over what information is compromised. Score: 0.22
High (H): All information on the system is compromised. Score: 0.56
Integrity Metric
The integrity metric (I) describes the type of information alteration that might occur if an attacker successfully exploits the vulnerability. The integrity metric is assigned according to the criteria in Table 5.6.
TABLE 5.6 CVSS integrity metric
None (N): There is no integrity impact. Score: 0.00
Low (L): Modification of some information is possible, but the attacker does not have control over what information is modified. Score: 0.22
High (H): The integrity of the system is totally compromised, and the attacker may change any information at will. Score: 0.56
Availability Metric
The availability metric (A) describes the type of disruption that might occur if an attacker successfully exploits the vulnerability. The availability metric is assigned according to the criteria in Table 5.7.
TABLE 5.7 CVSS availability metric
None (N): There is no availability impact. Score: 0.00
Low (L): The performance of the system is degraded. Score: 0.22
High (H): The system is completely shut down. Score: 0.56
Scope Metric
The scope metric (S) describes whether the vulnerability can affect system components beyond the scope of the vulnerability. The scope metric is assigned according to the criteria in Table 5.8. Note that the scope metric table does not contain score information. The value of the scope metric is reflected in the values for the privileges required metric, shown earlier in Table 5.3.
TABLE 5.8 CVSS scope metric
Unchanged (U): The exploited vulnerability can only affect resources managed by the same security authority.
Changed (C): The exploited vulnerability can affect resources beyond the scope of the security authority managing the component containing the vulnerability.
The current version of CVSS is version 3.1, which is a minor update from version 3.0. You will find that attack vectors normally cite version 3.0. This chapter uses CVSS version 3.1 as the basis of our conversation, but 3.0 and 3.1 are functionally equivalent for our purposes. You may still find documentation that references CVSS version 2, which uses a similar methodology but has different ratings and only six metrics.
Interpreting the CVSS Vector
The CVSS vector uses a single-line format to convey the ratings of a vulnerability on all eight of the metrics described in the preceding sections. For example, recall the CVSS vector for the vulnerability presented in Figure 5.12:
CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
This vector contains nine components. The first section, “CVSS:3.0,” simply informs the reader (human or system) that the vector was composed using CVSS version 3. The next eight sections correspond to each of the eight CVSS metrics. In this case, the SSL vulnerability in Figure 5.12 received the following ratings:
Attack Vector: Network (score: 0.85)
Attack Complexity: Low (score: 0.77)
Privileges Required: None (score: 0.85)
User Interaction: None (score: 0.85)
Scope: Unchanged
Confidentiality: High (score: 0.56)
Integrity: None (score: 0.00)
Availability: None (score: 0.00)
Summarizing CVSS Scores
The CVSS vector provides good detailed information on the nature of the risk posed by a vulnerability, but the complexity of the vector makes it difficult to use in prioritization exercises. For this reason, analysts can calculate the CVSS base score, which is a single number representing the overall risk posed by the vulnerability. Arriving at the base score requires first calculating some other CVSS component scores.
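The arithmetic described in the subsections that follow is straightforward to automate. The Python sketch below is a simplified illustration of the CVSS v3.1 base score calculation for the eight base metrics (it omits the specification's floating-point rounding nuances and the temporal and environmental metrics); it reproduces the 7.5 base score for the SSL vulnerability's vector shown earlier.

```python
import math

# Numeric values for each CVSS v3.1 base metric, matching the tables in this section.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}
PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.50}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.00}

def roundup(value: float) -> float:
    """CVSS 'roundup': the smallest number with one decimal place that is >= value."""
    return math.ceil(value * 10) / 10

def base_score(vector: str) -> float:
    # Parse a vector such as "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N"
    metrics = dict(part.split(":") for part in vector.split("/")[1:])
    scope_changed = metrics["S"] == "C"
    pr = (PR_CHANGED if scope_changed else PR_UNCHANGED)[metrics["PR"]]

    iss = 1 - (1 - CIA[metrics["C"]]) * (1 - CIA[metrics["I"]]) * (1 - CIA[metrics["A"]])
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = 8.22 * AV[metrics["AV"]] * AC[metrics["AC"]] * pr * UI[metrics["UI"]]

    if impact <= 0:
        return 0.0
    score = impact + exploitability
    if scope_changed:
        score *= 1.08
    return roundup(min(score, 10))

print(base_score("CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N"))  # prints 7.5
```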
CALCULATING THE IMPACT SUB-SCORE (ISS)
The first calculation analysts perform is computing the impact sub-score (ISS). This metric summarizes the three impact metrics using the formula:
ISS = 1 - [(1 - Confidentiality) × (1 - Integrity) × (1 - Availability)]
Plugging in the values for our SSL vulnerability, we obtain:
ISS = 1 - [(1 - 0.56) × (1 - 0.00) × (1 - 0.00)] = 1 - 0.44 = 0.56
CALCULATING THE IMPACT SCORE
To obtain the impact score from the impact sub-score, we must take the value of the scope metric into account. If the scope metric is Unchanged, as it is in our example, we multiply the ISS by 6.42:
Impact = 6.42 × ISS = 6.42 × 0.56 ≈ 3.60
If the scope metric is Changed, we use a more complex formula:
Impact = 7.52 × (ISS - 0.029) - 3.25 × (ISS - 0.02)^15
CALCULATING THE EXPLOITABILITY SCORE
Analysts may calculate the exploitability score for a vulnerability using this formula:
Exploitability = 8.22 × AttackVector × AttackComplexity × PrivilegesRequired × UserInteraction
Plugging in values for our SSL vulnerability, we get:
Exploitability = 8.22 × 0.85 × 0.77 × 0.85 × 0.85 ≈ 3.89
CALCULATING THE BASE SCORE
With all of this information at hand, we can now determine the CVSS base score using the following rules:
If the impact is 0, the base score is 0.
If the scope metric is Unchanged, calculate the base score by adding together the impact and exploitability scores.
If the scope metric is Changed, calculate the base score by adding together the impact and exploitability scores and multiplying the result by 1.08.
The highest possible base score is 10. If the calculated value is greater than 10, set the base score to 10.
In our example, the impact score is 3.60 and the exploitability score rounds to 3.9. Adding these together, we get a base score of 7.5, which is the same value found in Figure 5.12.
Now that you understand the math behind CVSS scores, the good news is that you don't need to perform these calculations by hand. FIRST offers a CVSS calculator at www.first.org/cvss/calculator/3.1, where you can easily compute the CVSS base score for a vulnerability.
CATEGORIZING CVSS BASE SCORES
Many vulnerability scanning systems further summarize CVSS results by using risk categories rather than numeric risk ratings. These are usually based on the CVSS Qualitative Severity Rating Scale, shown in Table 5.9.
TABLE 5.9 CVSS Qualitative Severity Rating Scale
0.0: None
0.1–3.9: Low
4.0–6.9: Medium
7.0–8.9: High
9.0–10.0: Critical
Continuing with the SSL vulnerability example from Figure 5.12, we calculated the CVSS score for this vulnerability as 7.5. This places it into the High risk category, as shown in the header of Figure 5.12.
Exam Note
Be sure you are familiar with the CVSS severity rating scale for the exam. These scores are a common topic for exam questions!
Confirmation of Scan Results
Cybersecurity analysts interpreting reports often perform their own investigations to confirm the presence and severity of vulnerabilities. These investigations may include the use of external data sources that supply additional information valuable to the analysis.
False Positives
Vulnerability scanners are useful tools, but they aren't foolproof. Scanners do sometimes make mistakes for a variety of reasons. The scanner might not have sufficient access to the target system to confirm a vulnerability, or it might simply have an error in a plug-in that generates an erroneous vulnerability report. When a scanner reports a vulnerability that does not exist, this is known as a false positive error. When a vulnerability scanner reports a vulnerability, this is known as a positive report. This report may either be accurate (a true positive report) or inaccurate (a false positive report). Similarly, when a scanner reports that a vulnerability is not present, this is a negative report.
The negative report may either be accurate (a true negative report) or inaccurate (a false negative report). Exam Note As you prepare for the exam, focus on the topics of false positives and false negatives. You must understand the types of errors that might occur in vulnerability reports and be prepared to identify them in scenarios on the exam. Cybersecurity analysts should confirm each vulnerability reported by a scanner. In some cases, this may be as simple as verifying that a patch is missing or an operating system is outdated. In other cases, verifying a vulnerability requires a complex manual process that simulates an exploit. For example, verifying a SQL injection vulnerability may require actually attempting an attack against a web application and verifying the result in the backend database. When verifying a vulnerability, analysts should draw on their own expertise as well as the subject matter expertise of others throughout the organization. Database administrators, system engineers, network technicians, software developers, and other experts have domain knowledge that is essential to the evaluation of a potential false positive report. Reconciling Scan Results with Other Data Sources Vulnerability scans should never take place in a vacuum. Cybersecurity analysts interpreting these reports should also turn to other sources of security information as they perform their analysis. Valuable information sources for this process include the following: Log reviews from servers, applications, network devices, and other sources that might contain information about possible attempts to exploit detected vulnerabilities Security information and event management (SIEM) systems that correlate log entries from multiple sources and provide actionable intelligence Configuration management systems that provide information on the operating system and applications installed on a system Each of these information sources can prove invaluable when an analyst attempts to reconcile a scan report with the reality of the organization's computing environment. Vulnerability Classification Each vulnerability scanning system contains plug-ins able to detect thousands of possible vulnerabilities, ranging from major SQL injection flaws in web applications to more mundane information disclosure issues with network devices. Though it's impossible to discuss each of these vulnerabilities in a book of any length, cybersecurity analysts should be familiar with the most commonly detected vulnerabilities and some of the general categories that cover many different vulnerability variants. Patch Management Applying security patches to systems should be one of the core practices of any information security program, but this routine task is often neglected due to a lack of resources for preventive maintenance. One of the most common alerts from a vulnerability scan is that one or more systems on the network are running an outdated version of an operating system or application and require security patches. Figure 5.13 shows an example of one of these scan results. The server in this report has a remote code execution vulnerability. Though the scan result is fairly brief, it does contain quite a bit of helpful information. Fortunately, there is an easy way to fix this problem. 
The Solution section tells us that Microsoft corrected the issue with app version 2.0.32791.0 or later, and the See Also section provides a direct link to the Microsoft security bulletin (CVE-2022-44687) that describes the issue and solution in greater detail. The vulnerability shown in Figure 5.13 highlights the importance of operating a patch management program that routinely patches security issues. The issue shown in Figure 5.13 exposes improper or weak patch management at the operating system level, but these weaknesses can also exist in applications and firmware. FIGURE 5.13 Missing patch vulnerability Legacy Platforms Software vendors eventually discontinue support for every product they make. This is true for operating systems as well as applications. Once they announce the final end of support for a product, organizations that continue running the outdated software put themselves at a significant risk of attack. The vendor simply will not investigate or correct security flaws that arise in the product after that date. Organizations continuing to run the unsupported product are on their own from a security perspective, and unless you happen to maintain a team of operating system developers, that's not a good situation to find yourself in. Perhaps the most famous end of support for a major operating system occurred in July 2015 when Microsoft discontinued support for the more than two decades old Windows Server 2003. Figure 5.14 shows an example of the report generated by Nessus when it identifies a server running this outdated operating system. We can see from this report that the scan detected two servers on the network running Windows Server 2003. The description of the vulnerability provides a stark assessment of what lies in store for organizations continuing to run any unsupported operating system: Lack of support implies that no new security patches for the product will be released by the vendor. As a result, it is likely to contain security vulnerabilities. Furthermore, Microsoft is unlikely to investigate or acknowledge reports of vulnerabilities. FIGURE 5.14 Unsupported operating system vulnerability The solution for organizations running unsupported operating systems is simple in its phrasing but complex in implementation. “Upgrade to a version of Windows that is currently supported” is a pretty straightforward instruction, but it may pose a significant challenge for organizations running applications that simply can't be upgraded to newer versions of Windows. In cases where the organization simply must continue using an unsupported operating system, best practice dictates isolating the system as much as possible, preferably not connecting it to any network, and applying as many compensating security controls as possible, such as increased monitoring and implementing strict network firewall rules. Exam Note Remember that good vulnerability response and remediation practices include patching, insurance, segmentation, compensating controls, exceptions, and exemptions. Weak Configurations Vulnerability scans may also highlight weak configuration settings on systems, applications, and devices. These weak configurations may include the following: The use of default settings that pose a security risk, such as administrative setup pages that are meant to be disabled before moving a system to production. The presence of default credentials or unsecured accounts, including both normal user accounts and unsecured root accounts with administrative privileges. 
Accounts may be considered unsecured when they either lack strong authentication or use default passwords.
Open service ports that are not necessary to support normal system operations. This will vary based on the function of a server or device but, in general, a system should expose only the minimum number of services necessary to carry out its function.
Open permissions that allow users access that violates the principle of least privilege.
These are just a few examples of the many weak configuration settings that may jeopardize security. You'll want to carefully read the results of vulnerability scans to identify other issues that might arise in your environment.
Error Messages
Many application development platforms support debug modes that give developers crucial error information needed to troubleshoot applications in the development process. Debug mode typically provides detailed information on the inner workings of an application and server, as well as supporting databases. Although this information can be useful to developers, it can inadvertently assist an attacker seeking to gain information about the structure of a database, authentication mechanisms used by an application, or other details. For this reason, vulnerability scans do alert on the presence of debug mode on scanned servers. Figure 5.15 shows an example of this type of scan result.
In this example, the target system appears to be a Windows Server supporting the ASP.NET development environment. The Output section of the report demonstrates that the server responds when sent a DEBUG request by a client. Solving this issue requires the cooperation of developers and disabling debug modes on systems with public exposure. In mature organizations, software development should always take place in a dedicated development environment that is only accessible from private networks. Developers should be encouraged (or ordered!) to conduct their testing only on systems dedicated to that purpose, and it would be entirely appropriate to enable debug mode on those servers. There should be no need for supporting this capability on public-facing systems.
FIGURE 5.15 Debug mode vulnerability
Insecure Protocols
Many of the older protocols used on networks in the early days of the Internet were designed without security in mind. They often failed to use encryption to protect usernames, passwords, and the content sent over an open network, exposing the users of the protocol to eavesdropping attacks. Telnet is one example of an insecure protocol used to gain command-line access to a remote server. The File Transfer Protocol (FTP) provides the ability to transfer files between systems but does not incorporate security features. Figure 5.16 shows an example of a scan report that detected a system that supports the insecure FTP protocol.
The solution for this issue is to simply switch to a more secure protocol. Fortunately, encrypted alternatives exist for both Telnet and FTP. System administrators can use Secure Shell (SSH) as a secure replacement for Telnet when seeking to gain command-line access to a remote system. Similarly, the Secure File Transfer Protocol (SFTP) and FTP-Secure (FTPS) both provide a secure method to transfer files between systems.
FIGURE 5.16 FTP cleartext authentication vulnerability
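A quick way to confirm a finding like the one in Figure 5.16 is to connect to the reported port and read the service banner yourself. The sketch below probes the classic cleartext ports for FTP and Telnet; the target address is a placeholder, the port list is an illustrative subset, and, as always, you should only probe systems you are authorized to test.

```python
import socket
from typing import Optional

# Ports historically associated with cleartext protocols (illustrative subset).
CLEARTEXT_PORTS = {21: "FTP", 23: "Telnet"}

def grab_banner(host: str, port: int, timeout: float = 3.0) -> Optional[str]:
    """Return whatever the service sends first, or None if the port doesn't answer in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return sock.recv(256).decode(errors="replace").strip()
    except OSError:
        return None

if __name__ == "__main__":
    target = "192.0.2.15"  # placeholder address; probe only systems you are authorized to test
    for port, protocol in CLEARTEXT_PORTS.items():
        banner = grab_banner(target, port)
        if banner is not None:
            print(f"{protocol} answered on {target}:{port}: {banner!r}")
```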
Weak Encryption
Encryption is a crucial security control used in every cybersecurity program to protect stored data and data in transit over networks. As with any control, however, encryption must be configured securely to provide adequate protection. You'll learn more about securely implementing encryption in Chapter 7, “Cryptography and the PKI.”
When you implement encryption, you have two important choices to make:
The algorithm to use to perform encryption and decryption
The encryption key to use with that algorithm
The choices that you make for both of these characteristics may have a profound impact on the security of your environment. If you use a weak encryption algorithm, it may be easily defeated by an attacker. If you choose an encryption key that is easily guessable because of its length or composition, an attacker may find it using a cryptographic attack. For example, Figure 5.17 shows a scan report from a system that supports the insecure RC4 cipher. This system should be updated to support a secure cipher, such as the Advanced Encryption Standard (AES).
FIGURE 5.17 Insecure SSL cipher vulnerability
Penetration Testing
Penetration testing seeks to bridge the gap between the rote use of technical tools to test an organization's security and the power of those tools when placed in the hands of a skilled and determined attacker. Penetration tests are authorized, legal attempts to defeat an organization's security controls and perform unauthorized activities. These tests are time-consuming and require staff who are just as skilled and determined as the real-world attackers that will attempt to compromise the organization. However, they're also the most effective way for an organization to gain a complete picture of its security vulnerabilities.
Adopting the Hacker Mindset
In Chapter 1, “Today's Security Professional,” you learned about the CIA triad and how the goals of confidentiality, integrity, and availability are central to the field of cybersecurity. Cybersecurity defenders do spend the majority of their time thinking in these terms, designing controls and defenses to protect information and systems against a wide array of known and unknown threats to confidentiality, integrity, and availability.
Penetration testers must take a very different approach in their thinking. Instead of trying to defend against all possible threats, they only need to find a single vulnerability that they might exploit to achieve their goals. To find these flaws, they must think like the adversary who might attack the system in the real world. This approach is commonly known as adopting the hacker mindset.
Before we explore the hacker mindset in terms of technical systems, let's explore it using an example from the physical world. If you were responsible for the physical security of an electronics store, you might consider a variety of threats and implement controls designed to counter those threats. You'd be worried about shoplifting, robbery, and employee embezzlement, among other threats, and you might build a system of security controls that seeks to prevent those threats from materializing. These controls might include the following:
Security cameras in high-risk areas
Auditing of cash register receipts
Theft detectors at the main entrance/exit to the store
Exit alarms on emergency exits
Burglar alarm wired to detect the opening of doors outside business hours
Now, imagine that you've been engaged to conduct a security assessment of this store. You'd likely examine each one of these security controls and assess its ability to prevent each of the threats identified in your initial risk assessment.
You'd also look for gaps in the existing security controls that might require supplementation. Your mandate is broad and high-level. Penetration tests, on the other hand, have a much more focused mandate. Instead of adopting the approach of a security professional, you adopt the mindset of an attacker. You don't need to evaluate the effectiveness of each one of these security controls. You simply need to find either one flaw in the existing controls or one scenario that was overlooked in planning those controls. In this example, a penetration tester might enter the store during business hours and conduct reconnaissance, gathering information about the security controls that are in place and the locations of critical merchandise. They might notice that, though the burglar alarm is tied to the doors, it does not include any sensors on the windows. The tester might then return in the middle of the night, smash a window, and grab valuable merchandise. Recognizing that the store has security cameras in place, the attacker might wear a mask and park a vehicle outside of the range of the cameras. That's the hacker mindset. You need to think like a criminal. There's an important corollary to the hacker mindset that is important for both attackers and defenders to keep in mind. When conducting a penetration test (or a real-world attack), the attacker needs to win only once. They might attempt hundreds or thousands of potential attacks against a target. The fact that an organization's security defenses block 99.99 percent of those attacks is irrelevant if one of the attacks succeeds. Cybersecurity professionals need to win every time; attackers need to win only once. Reasons for Penetration Testing The modern organization dedicates extensive time, energy, and funding to a wide variety of security controls and activities. We install firewalls, intrusion prevention systems, security information and event management systems, vulnerability scanners, and many other tools. We equip and staff 24-hour security operations centers (SOCs) to monitor those technologies and watch our systems, networks, and applications for signs of compromise. There's more than enough work to completely fill our days twice over. Why on earth would we want to take on the additional burden of performing penetration tests? After all, they are time-consuming to perform internally and expensive to outsource. The answer to this question is that penetration testing provides us with visibility into the organization's security posture that simply isn't available by other means. Penetration testing does not seek to replace all the other cybersecurity activities of the organization. Instead, it complements and builds on those efforts. Penetration testers bring their unique skills and perspectives to the table and can take the output of security tools and place them within the attacker's mindset, asking the question, “If I were an attacker, how could I use this information to my advantage?” Benefits of Penetration Testing We've already discussed how a penetration tester carries out their work at a high level, and the remainder of this book is dedicated to exploring penetration testing tools and techniques in great detail. Before we dive into that exploration, let's take a moment to consider why we conduct penetration testing. What benefits does it bring to the organization? First and foremost, penetration testing provides us with knowledge that we can't obtain elsewhere. 
By conducting thorough penetration tests, we learn whether an attacker with the same knowledge, skills, and information as our testers would likely be able to penetrate our defenses. If they can't gain a foothold, we can then be reasonably confident that our networks are secure against attack by an equivalently talented attacker under the present circumstances. Second, in the event that attackers are successful, penetration testing provides us with an important blueprint for remediation. Cybersecurity professionals can trace the actions of the testers as they progressed through the different stages of the attack and close the series of open doors that the testers passed through. This provides us with a more robust defense against future attacks. Finally, penetration tests can provide us with essential, focused information on specific attack targets. We might conduct a penetration test prior to the deployment of a new system that is specifically focused on exercising the security features of that new environment. Unlike the broad nature of an open-ended penetration test, these focused tests can drill into the defenses around a specific target and provide actionable insight that can prevent a vulnerability from initial exposure. Threat Hunting The discipline of threat hunting is closely related to penetration testing but has a separate and distinct purpose. As with penetration testing, cybersecurity professionals engaged in threat hunting seek to adopt the attacker's mindset and imagine how hackers might seek to defeat an organization's security controls. The two disciplines diverge in what they accomplish with this information. Although penetration testers seek to evaluate the organization's security controls by testing them in the same manner as an attacker might, threat hunters use the attacker mindset to search the organization's technology infrastructure for the artifacts of a successful attack. They ask themselves what a hacker might do and what type of evidence they might leave behind and then go in search of that evidence. Threat hunting builds on a cybersecurity philosophy known as the “presumption of compromise.” This approach assumes that attackers have already successfully breached an organization and searches out the evidence of successful attacks. When threat hunters discover a potential compromise, they then kick into incident handling mode, seeking to contain, eradicate, and recover from the compromise. They also conduct a postmortem analysis of the factors that contributed to the compromise in an effort to remediate deficiencies. This post-event remediation is another similarity between penetration testing and threat hunting: organizations leverage the output of both processes in similar ways. Threat hunters work with a variety of intelligence sources, using the concept of intelligence fusion to combine information from threat feeds, security advisories and bulletins, and other sources. They then seek to trace the path that an attacker followed as they maneuver through a target network. Penetration Test Types There are four major categories of penetration testing: Physical penetration testing focuses on identifying and exploiting vulnerabilities in an organization's physical security controls. This can include breaking into buildings, bypassing access control systems, or compromising surveillance systems. 
Penetration Test Types
There are four major categories of penetration testing:

Physical penetration testing focuses on identifying and exploiting vulnerabilities in an organization's physical security controls. This can include breaking into buildings, bypassing access control systems, or compromising surveillance systems. The primary objective of physical penetration testing is to assess the effectiveness of an organization's physical security measures in preventing unauthorized access to their facilities, equipment, and sensitive information.

Offensive penetration testing is a proactive approach where security professionals act as attackers to identify and exploit vulnerabilities in an organization's networks, systems, and applications. The goal of offensive penetration testing is to simulate real-world cyberattacks and determine how well an organization can detect, respond to, and recover from these attacks.

Defensive penetration testing focuses on evaluating an organization's ability to defend against cyberattacks. Unlike offensive penetration testing, which aims to exploit vulnerabilities, defensive penetration testing involves assessing the effectiveness of security policies, procedures, and technologies in detecting and mitigating threats.

Integrated penetration testing combines aspects of both offensive and defensive testing to provide a comprehensive assessment of an organization's security posture. This approach involves close collaboration between offensive and defensive experts to identify vulnerabilities, simulate attacks, and evaluate the effectiveness of defensive measures.

Once the type of assessment is known, one of the first things to decide about a penetration test is how much knowledge testers will have about the environment. Three typical classifications are used to describe this:

Known environment tests are tests performed with full knowledge of the underlying technology, configurations, and settings that make up the target. Testers will typically have such information as network diagrams, lists of systems and IP network ranges, and even credentials to the systems they are testing. Known environment tests allow for effective testing of systems without requiring testers to spend time identifying targets and determining which may be a way in. This means that a known environment test is often more complete, since testers can get to every system, service, or other target that is in scope and will have credentials and other materials that allow those targets to be tested. Of course, since testers can see everything inside an environment, they may not provide an accurate view of what an external attacker would see, and controls that would have been effective against most attackers may be bypassed.

Unknown environment tests are intended to replicate what an attacker would encounter. Testers are not provided with access to or information about an environment; instead, they must gather information, discover vulnerabilities, and make their way through an infrastructure or systems like an attacker would. This approach can be time-consuming, but it can help provide a reasonably accurate assessment of how secure the target is against an attacker of similar or lesser skill. It is important to note that the quality and skillset of your penetration tester or team is very important when conducting an unknown environment penetration test: if the threat actor you expect to target your organization is more capable than your testers, the test can't provide you with a realistic view of what that actor could do.

Partially known environment tests are a blend of known and unknown environment testing. A partially known environment test may provide some information about the environment to the penetration testers without giving full access, credentials, or configuration details.
A partially known environment test can help focus penetration testers' time and effort while also providing a more accurate view of what an attacker would actually encounter.

Exam Note
Be sure that you can differentiate and explain the four major categories and three classification types of penetration testing. The categories are physical penetration testing, offensive penetration testing, defensive penetration testing, and integrated penetration testing. The classification types are known environment tests, unknown environment tests, and partially known environment tests. You should be able to read a scenario describing a test and identify the type(s) of test being discussed.

Rules of Engagement
Once you have determined the type of assessment and the level of knowledge testers will have about the target, the rest of the rules of engagement (RoE) can be written. Key elements include the following:

The timeline for the engagement and when testing can be conducted. Some assessments will intentionally be scheduled for noncritical timeframes to minimize the impact of potential service outages, whereas others may be scheduled during normal business hours to help test the organization's reaction to attacks.

What locations, systems, applications, or other potential targets are included or excluded. This also often includes discussions about third-party service providers that may be impacted by the test, such as Internet service providers, software-as-a-service (SaaS) or other cloud service providers, or outsourced security monitoring services. Any special technical constraints should also be discussed in the RoE.

Data handling requirements for information gathered during the penetration test. This is particularly important when engagements cover sensitive organizational data or systems. Requirements for handling often include confidentiality requirements for the findings, such as encrypting data during and after the test, and contractual requirements for disposing of the penetration test data and results after the engagement is over.

What behaviors to expect from the target. Defensive behaviors like shunning, deny listing, or other active defenses may limit the value of a penetration test. If the test is meant to evaluate defenses, this may be useful. If the test is meant to test a complete infrastructure, shunning or blocking the penetration testing team's efforts can waste time and resources.

What resources are committed to the test. In known environment and partially known environment testing scenarios, time commitments from the administrators, developers, and other experts on the targets of the test are not only useful but often necessary for an effective test.

Legal concerns should also be addressed, including a review of the laws that cover the target organization, any remote locations, and any service providers that will be in scope.

When and how communications will occur. Should the engagement include daily or weekly updates regardless of progress, or will the penetration testers simply report when they are done with their work? How should the testers respond if they discover evidence of a current compromise?

Permission
The tools and techniques we cover in this book are the bread and butter of a penetration tester's job, but they can also be illegal to use without permission. Before you plan (and especially before you execute) a penetration test, you should have appropriate permission.
In most cases, you should be sure to have appropriate documentation for that permission in the form of a signed agreement, a memo from senior management, or a similar "get out of jail free" card from a person or people in the target organization with the rights to give you permission. Why is it called a "get out of jail free" card? It's the document that you would produce if something went wrong. Written permission from the appropriate party can help you stay out of trouble if it does.

Scoping agreements and the rules of engagement must define more than just what will be tested. In fact, documenting the limitations of the test can be just as important as what will be included. The testing agreement or scope documentation should contain disclaimers explaining that the test is valid only at the point in time that it is conducted, and that the scope and methodology chosen can impact the comprehensiveness of the test. After all, a known-environment penetration test is far more likely to find issues buried layers deep in a design than an unknown-environment test of well-secured systems!

Problem handling and resolution is another key element of the rules of engagement. Although penetration testers and clients always hope that the tests will run smoothly and won't cause any disruption, testing systems and services, particularly in production environments using actual attack and exploit tools, can cause outages and other problems. In those cases, having a clearly defined communication, notification, and escalation path on both sides of the engagement can help minimize downtime and other issues for the target organization. Penetration testers should carefully document their responsibilities and limitations of liability, and ensure that clients know what could go wrong and that both sides agree on how it should be handled. That way, both the known and unknown impacts of the test can be addressed appropriately.

Reconnaissance
Penetration tests begin with a reconnaissance phase, where the testers seek to gather as much information as possible about the target organization. In a known-environment test, the testers enter the exercise with significant knowledge, but they still seek to supplement this knowledge with additional techniques.

Passive reconnaissance techniques seek to gather information without directly engaging with the target. Chapter 2, "Cybersecurity Threat Landscape," covered a variety of open source intelligence (OSINT) techniques that fit into the category of passive reconnaissance. Common passive reconnaissance techniques include performing lookups of domain information using DNS and WHOIS queries, performing web searches, reviewing public websites, and similar tactics.

Active reconnaissance techniques directly engage the target in intelligence gathering. These techniques include the use of port scanning to identify open ports on systems, footprinting to identify the operating systems and applications in use, and vulnerability scanning to identify exploitable vulnerabilities.
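The contrast between the two approaches can be illustrated with a short Python sketch: a passive-style lookup of published DNS records followed by an active TCP connect scan of a few common ports. The target name and port list are placeholders, and the active portion should be run only against systems you have written permission to test.

```python
# Sketch contrasting passive and active reconnaissance. The hostname and port
# list are placeholders; only run the active portion against systems you have
# written permission to test.
import socket

TARGET = "www.example.com"          # illustrative target
COMMON_PORTS = [22, 80, 443, 3389]  # small sample of frequently used ports

# Passive-style lookup: query published DNS records rather than probing the target's services.
address = socket.gethostbyname(TARGET)
print(f"{TARGET} resolves to {address}")

# Active reconnaissance: attempt TCP connections to identify open ports.
for port in COMMON_PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        result = s.connect_ex((address, port))  # 0 means the connection succeeded
        print(f"Port {port}: {'open' if result == 0 else 'closed or filtered'}")
```

Purpose-built tools perform these steps far more thoroughly, but the sketch shows the essential difference: the passive step touches only public infrastructure, while the active step sends packets directly to the target.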
Exam Note
Know the difference between passive and active reconnaissance techniques. Passive techniques do not directly engage the target, whereas active reconnaissance directly engages the target.

One common goal of penetration testers is to identify wireless networks that may present a means of gaining access to an internal network of the target without gaining physical access to the facility. Testers use a technique called war driving, where they drive by facilities in a car equipped with high-end antennas and attempt to eavesdrop on or connect to wireless networks. Recently, testers have expanded this approach to the use of drones and unmanned aerial vehicles (UAVs) in a technique known as war flying.

Running the Test
During the penetration test, the testers follow the same process used by attackers. You'll learn more about this process in the discussion of the Cyber Kill Chain in Chapter 14, "Monitoring and Incident Response." However, you should be familiar with some key phases of the test as you prepare for the exam:

Initial access occurs when the attacker exploits a vulnerability to gain access to the organization's network.

Privilege escalation uses hacking techniques to shift from the initial access gained by the attacker to more advanced privileges, such as root access on the same system.

Pivoting, or lateral movement, occurs as the attacker uses the initial system compromise to gain access to other systems on the target network.

Attackers establish persistence on compromised networks by installing backdoors and using other mechanisms that will allow them to regain access to the network, even if the initial vulnerability is patched.

Penetration testers make use of many of the same tools used by real attackers as they perform their work. Exploitation frameworks, such as Metasploit, simplify the use of vulnerabilities by providing a modular approach to configuring and deploying vulnerability exploits.

Cleaning Up
At the conclusion of a penetration test, the testers conduct close-out activities that include presenting their results to management and cleaning up the traces of their work. Testers should remove any tools that they installed on systems as well as any persistence mechanisms that they put in place. The close-out report should provide the target with details on the vulnerabilities discovered during the test and advice on improving the organization's cybersecurity posture.
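As a rough illustration of that cleanup step, the sketch below checks a few common Linux persistence locations for artifacts a testing team might have planted. The locations, the idea of tagging planted artifacts with a marker string, and the marker value itself are assumptions made for the example, not a practice mandated by any framework.

```python
# Minimal cleanup-verification sketch: look for leftover artifacts a tester may
# have planted on a Linux host during an engagement. The locations checked and
# the "PENTEST" marker convention are illustrative assumptions.
from pathlib import Path

MARKER = "PENTEST"  # assume the team tags everything it plants with this string
LOCATIONS = [
    Path("/etc/crontab"),                  # scheduled-task persistence
    Path("/etc/systemd/system"),           # service-based persistence
    Path.home() / ".ssh/authorized_keys",  # planted SSH keys
]

def find_leftovers():
    """Return files under the watched locations that still contain the marker."""
    leftovers = []
    for location in LOCATIONS:
        files = location.rglob("*") if location.is_dir() else [location]
        for file in files:
            try:
                if file.is_file() and MARKER in file.read_text(errors="ignore"):
                    leftovers.append(file)
            except PermissionError:
                continue  # skip files the current user cannot read
    return leftovers

if __name__ == "__main__":
    for artifact in find_leftovers():
        print(f"Possible leftover test artifact: {artifact}")
```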
Audits and Assessments
The cornerstone maintenance activity for an information security team is their security assessment and testing program. This program includes tests, assessments, and audits that regularly verify that an organization has adequate security controls and that those security controls are functioning properly and effectively safeguarding information assets. In this section, you will learn about the three major components of a security assessment program:

Security tests
Security assessments
Security audits

Security Tests
Security tests verify that a control is functioning properly. These tests include automated scans, tool-assisted penetration tests, and manual attempts to undermine security. Security testing should take place on a regular schedule, with attention paid to each of the key security controls protecting an organization. When scheduling security controls for review, information security managers should consider the following factors:

Availability of security testing resources
Criticality of the systems and applications protected by the tested controls
Sensitivity of information contained on tested systems and applications
Likelihood of a technical failure of the mechanism implementing the control
Likelihood of a misconfiguration of the control that would jeopardize security
Risk that the system will come under attack
Rate of change of the control configuration
Other changes in the technical environment that may affect the control performance
Difficulty and time required to perform a control test
Impact of the test on normal business operations

After assessing each of these factors, security teams design and validate a comprehensive assessment and testing strategy. This strategy may include frequent automated tests supplemented by infrequent manual tests. For example, a credit card processing system may undergo automated vulnerability scanning on a nightly basis with immediate alerts to administrators when the scan detects a new vulnerability. The automated scan requires no work from administrators once it is configured, so it is easy to run quite frequently. The security team may wish to complement those automated scans with a manual penetration test performed by an external consultant for a significant fee. Those tests may occur on an annual basis to minimize costs and disruption to the business.
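The nightly-scan example implies a simple piece of automation: compare the newest scan results with the previous run and alert administrators only when something new appears. The following Python sketch shows one way that comparison might look; the file names and the one-finding-per-line export format are assumptions for the example.

```python
# Minimal sketch of automated scan-result triage: compare last night's scan
# export with the prior baseline and report only newly detected findings.
# File names and the one-finding-per-line export format are assumptions.
def load_findings(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

baseline = load_findings("scan_previous.txt")   # e.g., CVE IDs from the last run
latest = load_findings("scan_latest.txt")       # e.g., CVE IDs from tonight's run

new_findings = latest - baseline
resolved = baseline - latest

if new_findings:
    # In production this might send an email, page an on-call administrator, or open a ticket.
    print("ALERT: new findings detected:", ", ".join(sorted(new_findings)))
else:
    print("No new findings since the last scan.")

print("Resolved since the last scan:", ", ".join(sorted(resolved)) or "none")
```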
Many security testing programs begin on a haphazard basis, with security professionals simply pointing their fancy new tools at whatever systems they come across first. Experimentation with new tools is fine, but security testing programs should be carefully designed and include rigorous, routine testing of systems using a risk-prioritized approach.

Of course, it's not sufficient to simply perform security tests. Security professionals must also carefully review the results of those tests to ensure that each test was successful. In some cases, these reviews consist of manually reading the test output and verifying that the test completed successfully. Some tests require human interpretation and must be performed by trained analysts. Other reviews may be automated, performed by security testing tools that verify the successful completion of a test, log the results, and remain silent unless there is a significant finding. When the system detects an issue requiring administrator attention, it may trigger an alert, send an email or text message, or automatically open a trouble ticket, depending on the severity of the alert and the administrator's preference.

Responsible Disclosure Programs
Responsible disclosure programs allow security researchers to securely share information about vulnerabilities in a product with the vendor responsible for that product. The purpose of the program is to create a collaborative environment between the organization and the security community, allowing for the timely identification, reporting, and remediation of security vulnerabilities. These programs provide organizations with an opportunity to benefit from the wisdom and talent of cybersecurity professionals outside their own teams.

Bug bounty programs are a type of responsible disclosure program that incentivizes responsible disclosure submissions by offering financial rewards (or "bounties") to testers who successfully discover vulnerabilities. Supporters of bug bounty programs often point out that outsiders will probe your security whether you like it or not. Running a formal bug bounty program provides them with the incentive to let you know when they discover security issues.

Security Assessments
Security assessments are comprehensive reviews of the security of a system, application, or other tested environment. During a security assessment, a trained information security professional performs a risk assessment that identifies vulnerabilities in the tested environment that may allow a compromise and makes recommendations for remediation, as needed.

Security assessments normally include the use of security testing tools but go beyond automated scanning and manual penetration tests. They also include a thoughtful review of the threat environment, current and future risks, and the value of the targeted environment. The main work product of a security assessment is normally an assessment report addressed to management that contains the results of the assessment in nontechnical language and concludes with specific recommendations for improving the security of the tested environment. Assessments may be conducted by an internal team, or they may be outsourced to a third-party assessment team with specific expertise in the areas being assessed.

Security Audits
Security audits use many of the same techniques followed during security assessments but must be performed by independent auditors. While an organization's security staff may routinely perform security tests and assessments, this is not the case for audits. Assessment and testing results are meant for internal use only and are designed to evaluate controls with an eye toward finding potential improvements. Audits, on the other hand, are formal examinations performed with the purpose of demonstrating the effectiveness of controls to a third party. The staff who design, implement, and monitor controls for an organization have an inherent conflict of interest when evaluating the effectiveness of those controls.

Auditors provide an impartial, unbiased view of the state of security controls. They write reports that are quite similar to security assessment reports, but those reports are intended for different audiences that may include an organization's board of directors, government regulators, and other third parties. One of the primary outcomes of an audit is an attestation by the auditor. This is a formal statement that the auditors have reviewed the controls and found that they are both adequate to meet the control objectives and working properly. There are three main types of audits: internal audits, external audits, and third-party audits.

Internal Audits
Internal audits are performed by an organization's internal audit staff and are typically intended for internal audiences. The internal audit staff performing these audits normally have a reporting line that is completely independent of the functions they evaluate. In many organizations, the chief audit executive reports directly to the president, chief executive officer (CEO), or similar role. The chief audit executive (CAE) may also have reporting responsibility directly to the organization's governing board and/or the audit committee of that board.

Internal audits may be conducted for a variety of reasons. Often, management or the board would like to obtain reassurance that the organization is meeting its compliance obligations.
In addition, the internal audit team may lead a series of self-assessments designed to identify control gaps in advance of a more formal external audit.

External Audits
External audits are performed by an outside auditing firm that serves as an independent third party. These audits have a high degree of external validity because the auditors performing the assessment theoretically have no conflict of interest with the organization itself. There are thousands of firms that perform external audits, but most people place the highest credibility with the so-called Big Four audit firms:

Ernst & Young
Deloitte
PricewaterhouseCoopers (PwC)
KPMG

Audits performed by these firms are generally considered acceptable by most investors and governing body members.

Independent Third-Party Audits
Independent third-party audits are conducted by, or on behalf of, another organization. For example, a regulatory body might have the authority to initiate an audit of a regulated firm under contract or law. In the case of an independent third-party audit, the organization initiating the audit generally selects the auditors and designs the scope of the audit.

Exam Note
Independent third-party audits are a subcategory of external audits; the only difference is who is requesting the audit. For an external audit, the request comes from the organization or its governing body. For an independent third-party audit, the request comes from a regulator, customer, or other outside entity.

Organizations that provide services to other organizations are frequently asked to participate in independent third-party audits. This can be quite a burden on the audited organization if they have a large number of clients. The American Institute of Certified Public Accountants (AICPA) released a standard designed to alleviate this burden. The Statement on Standards for Attestation Engagements document 18 (SSAE 18), titled Reporting on Controls, provides a common standard to be used by auditors performing assessments of service organizations with the intent of allowing the organization to conduct an external assessment instead of multiple third-party assessments and then sharing the resulting report with customers and potential customers. SSAE 18 engagements are commonly referred to as service organization controls (SOC) audits.

Auditing Standards
When conducting an audit or assessment, the team performing the review should be clear about the standard that they are using to assess the organization. The standard provides the description of control objectives that should be met, and then the audit or assessment is designed to ensure that the organization properly implemented controls to meet those objectives.

One common framework for conducting audits and assessments is the Control Objectives for Information and related Technologies (COBIT). COBIT describes the common requirements that organizations should have in place surrounding their information systems. The COBIT framework is maintained by the Information Systems Audit and Control Association (ISACA), the creators of the Certified Information Systems Auditor (CISA) and Certified Information Security Manager (CISM) certifications.

The International Organization for Standardization (ISO) also publishes a set of standards related to information security. ISO 27001 describes a standard approach for setting up an information security management system, while ISO 27002 goes into more detail on the specifics of information security controls.
These internationally recognized standards are widely used within the security field, and organizations may choose to become officially certified as compliant with ISO 27001.

Vulnerability Life Cycle
Now that you've learned about many of the activities involved in vulnerability management, let's take a look at the entire vulnerability life cycle and how these pieces fit together, as shown in Figure 5.18.

FIGURE 5.18 Vulnerability life cycle

Vulnerability Identification
During the first stage in the process, the organization becomes aware of a vulnerability that exists within their environment. This identification may come from many different sources, including:

Vulnerability scans run by the organization or outside assessors
Penetration tests of the organization's environment
Reports from responsible disclosure or bug bounty programs
Results of system and process audits

Vulnerability Analysis
After identifying a possible vulnerability in the organization's environment, cybersecurity professionals next perform an analysis of that report. This includes several core tasks:

Confirming that the vulnerability exists and is not the result of a false positive report
Prioritizing and categorizing the vulnerability using tools such as CVSS and CVE that provide an external assessment of the vulnerability
Supplementing the external analysis of the vulnerability with organization-specific details, such as the organization's exposure factor to the vulnerability, environmental variables, industry and organizational impact, and the organization's risk tolerance
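As a rough illustration of this analysis step, the Python sketch below adjusts an externally supplied CVSS base score using organization-specific context such as exposure factor and asset criticality. The weighting scheme, field names, and sample records are assumptions made for the example; they are not part of the CVSS standard.

```python
# Minimal prioritization sketch for the analysis stage: blend an external CVSS
# base score with organization-specific context. The weighting scheme, field
# names, and sample data are illustrative assumptions, not a standard formula.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_base: float          # 0.0 to 10.0, from the scanner or NVD
    exposure_factor: float    # 0.0 to 1.0, portion of the asset's value at risk
    asset_criticality: float  # 0.0 to 1.0, business importance of the system

def internal_priority(finding: Finding) -> float:
    """Blend the external severity score with local context."""
    return round(finding.cvss_base * (0.5 + 0.25 * finding.exposure_factor
                                      + 0.25 * finding.asset_criticality), 1)

findings = [
    Finding("CVE-2024-0001", cvss_base=9.8, exposure_factor=0.9, asset_criticality=1.0),
    Finding("CVE-2024-0002", cvss_base=7.5, exposure_factor=0.2, asset_criticality=0.3),
]

# Highest internal priority first; ties could be broken by the organization's risk tolerance rules.
for f in sorted(findings, key=internal_priority, reverse=True):
    print(f"{f.cve_id}: CVSS {f.cvss_base} -> internal priority {internal_priority(f)}")
```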
Vulnerability Response and Remediation
The outcome of the vulnerability analysis should guide the organization to identify the vulnerabilities that are most in need of remediation. At this point, cybersecurity professionals should respond to the vulnerability in one or more of the following ways:

Apply a patch or other corrective measure to correct the vulnerability.
Use network segmentation to isolate the affected system so that the probability of an exploit becomes remote.
Implement other compensating controls, such as application firewalls or intrusion prevention systems, to reduce the likelihood that an attempted exploit will be successful.
Purchase insurance to transfer the financial risk of the vulnerability to an insurance provider.
Grant an exception or exemption to the system as part of a formal risk acceptance strategy.

Validation of Remediation
After completing a vulnerability remediation effort, cybersecurity professionals should perform a validation that the vulnerability is no longer present. This is typically done by rescanning the affected system and verifying that the vulnerability no longer appears in scan results. In the case of more serious vulnerabilities, internal or external auditors may perform this validation to provide independent assurance that the issue is resolved.

Reporting
The final stage in the vulnerability life cycle is reporting, which involves communicating the findings, actions taken, and lessons learned to relevant stakeholders within the organization. This step ensures that decision-makers are informed about the current state of the organization's security posture and the effectiveness of the vulnerability management program. Reporting may include:

Summarizing the vulnerabilities identified, analyzed, and remediated, along with their initial severity and impact on the organization
Providing details on the remediation actions taken, including patches applied, compensating controls implemented, and risk acceptance decisions made
Highlighting any trends, patterns, or areas requiring further attention, such as recurring vulnerabilities or systems that are particularly susceptible to exploitation
Offering recommendations for improvements in the vulnerability management process, security policies, or employee training programs based on the findings and experiences throughout the life cycle

Regular and comprehensive reporting not only demonstrates the organization's commitment to cybersecurity but also enables continuous improvement by identifying potential gaps in the vulnerability management process and fostering a proactive approach to addressing security risks.

Summary
Security assessment and testing plays a crucial role in the ongoing management of a cybersecurity program. The techniques discussed in this chapter help cybersecurity professionals maintain effective security controls and stay abreast of changes in their environment that might alter their security posture.

Vulnerability scanning identifies potential security issues in systems, applications, and devices, providing teams with the ability to remediate those issues before they are exploited by attackers. The vulnerabilities that may be detected during these scans include improper patch management, weak configurations, default accounts, and the use of insecure protocols and ciphers.

Penetration testing puts security professionals in the role of attackers and asks them to conduct offensive operations against their targets in an effort to discover security issues. The results of penetration tests provide a roadmap for improving security controls.

Exam Essentials
Many vulnerabilities exist in modern computing environments. Cybersecurity professionals should remain aware of the risks posed by vulnerabilities both on-premises and in the cloud. Improper or weak patch management can be the source of many of these vulnerabilities, providing attackers with a path to exploit operating systems, applications, and firmware. Weak configuration settings that create vulnerabilities include open permissions, unsecured root accounts, errors, weak encryption settings, insecure protocol use, default settings, and open ports and services. When a scan detects a vulnerability that does not exist, the report is known as a false positive. When a scan does not detect a vulnerability that actually exists, the report is known as a false negative.

Threat hunting discovers existing compromises. Threat hunting activities presume that an organization is already compromised and search for indicators of those compromises. Threat hunting efforts include the use of advisories, bulletins, and threat intelligence feeds in an intelligence fusion program. They search for signs that attackers gained initial access to a network and then conducted maneuver activities on that network.
