CHAPTER 18
Security Assessments

This chapter presents the following:

- Test, assessment, and audit strategies
- Testing technical security controls
- Conducting or facilitating security audits

Trust, but verify.
—Russian proverb

You can hire the best people, develop sound policies and procedures, and deploy world-class technology in an effort to secure your information systems, but if you do not regularly assess the effectiveness of these measures, your organization will not be secure for long. Unfortunately, thousands of well-intentioned organizations have learned the truth of this statement the hard way, realizing only after a security breach has occurred that the state-of-the-art controls they put into place initially have become less effective over time. So, unless your organization is continuously assessing and improving its security posture, that posture will become ineffective over time.

This chapter covers some of the most important elements of security assessments and testing. It is divided into three sections. We start by discussing assessment, test, and audit strategies, particularly how to design and assess them. From there, we get into the nitty-gritty of various common forms of testing with which you should be familiar. The third and final section discusses the various kinds of formal security audits and how you can conduct or facilitate them.

Test, Assessment, and Audit Strategies

Let’s start by establishing some helpful definitions in the context of information systems security. A test is a procedure that records some set of properties or behaviors in a system being tested and compares them against predetermined standards. If you install a new device on your network, you might want to test its attack surface by running a network scanner against it, recording the open ports, and then comparing them against the appropriate security standards used in your organization. An assessment is a series of planned tests that are somehow related to each other.
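The network scanner test just described can be sketched in a few lines of Python: record the open TCP ports, then compare them against a baseline of allowed ports. The host, port list, and `ALLOWED_PORTS` baseline here are hypothetical stand-ins for your organization's actual standard, not values from any real policy.

```python
import socket

# Hypothetical baseline: the ports our (assumed) standard permits on a new device.
ALLOWED_PORTS = {22, 443}

def scan_ports(host, ports, timeout=0.5):
    """Record which of the given TCP ports are open on the host."""
    open_ports = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.add(port)
    return open_ports

def compare_to_standard(open_ports, allowed=ALLOWED_PORTS):
    """The 'test' step: compare recorded behavior against the predetermined standard."""
    return sorted(open_ports - allowed)  # ports that violate the baseline

# Using a recorded result rather than a live scan:
observed = {22, 443, 8080}
violations = compare_to_standard(observed)
```

The comparison step works the same whether the port set comes from the `scan_ports()` helper or from a commercial scanner's exported results.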
For example, we could conduct a vulnerability assessment against a new software system to determine how secure it is. This assessment would include some specific (and hopefully relevant) vulnerability tests together with static and dynamic analysis of its software. An audit is a systematic assessment of significant importance to the organization that determines whether the system or process being audited satisfies some external standards. By “external” we mean that the organization being audited did not author the standards all by itself.

EXAM TIP You don’t have to memorize these definitions. They are presented simply to give you an idea of the different scopes. Many security professionals use the terms almost interchangeably.

Each of these three types of system evaluations plays an important role in ensuring the security of our organizations. Our job as cybersecurity leaders is to integrate them into holistic strategies that, when properly executed, give us a complete and accurate picture of our security posture. It all starts with the risk management concepts we discussed in Chapter 2. Remember that risks determine which security controls we use, which are the focus of any tests, assessments, or audits we perform. So, a good security assessment strategy verifies that we are sufficiently protected against the risks we’re tracking.

The security assessment strategy guides the development of standard testing and assessment procedures. This standardization is important because it ensures these activities are done in a consistent, repeatable, and cost-effective manner. We’ll cover testing procedures later in this chapter, so let’s take a look at how to design and validate a security assessment.

Designing an Assessment

The first step in designing anything is to figure out what it is that we are trying to accomplish. Are we getting ready for an external audit?
Did we suffer a security incident because a control was not properly implemented? The answers to these sample questions point to significantly different types of assessment. In the first case, the assessment would be a very broad effort to verify an external standard with which we are supposed to be compliant. In the second case, the assessment would be focused on a specific control, so it would be much narrower in scope.

Let’s elaborate on the second case as we develop a notional assessment plan. Suppose the security incident happened because someone clicked a link on a phishing e-mail and downloaded malware that the endpoint detection and response (EDR) solution was able to block. The EDR solution worked fine, but we are now concerned about our e-mail security controls. The objective of our assessment, therefore, is to determine the effectiveness of our e-mail defenses.

Once we have the objective identified, we can determine the necessary scope to accomplish it. When we talk about the scope of an assessment, we really mean which specific controls we will test. In our example, we would probably want to look at our e-mail security gateway, but we should also look at our staff members’ security awareness. Next, we have to decide how many e-mail messages and how many users we need to assess to have confidence in our results.

The scope, in turn, informs the methods we use. We’ll cover some of the testing techniques shortly, but it’s important to note that it’s not just what we do but how we do it that matters. We should develop standardized methodologies so that different tests are consistent and comparable. Otherwise, we won’t necessarily know whether our posture is deteriorating from one assessment to the next. Another reason to standardize the methodologies is to ensure we take into account business and operational impacts.
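The scoping question of how many users to include can be answered with a standard sample-size estimate for a proportion (for example, the fraction of users who would click a simulated phishing message). A sketch, assuming a hypothetical population of 2,000 users and a 5 percent margin of error at 95 percent confidence:

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Estimate how many users to assess for a given margin of error.

    Standard formula for a proportion (p=0.5 is the most conservative choice),
    with a finite-population correction applied afterward.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2      # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)           # finite-population correction
    return math.ceil(n)

# Hypothetical organization of 2,000 users:
users_to_test = sample_size(population=2000, margin=0.05)
```

The same calculation applies to sampling e-mail messages; only the population figure changes.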
For example, if we decide to test the e-mail security gateway using a penetration test, then there is a chance that we will interfere with e-mail service availability. This could present operational risks that we need to mitigate. Furthermore, on the off-chance that we break something during our assessment, we need to have a contingency plan that allows for the quick restoration of all services and capabilities.

A key decision is whether the assessment will be performed by an internal team or by a third party. If you don’t have the in-house expertise, then this decision may very well already have been made for you. But even if your team has this expertise, you may still choose to bring in external auditors for any of a variety of reasons. For example, there may be a regulatory requirement that an external party test your systems; or you may want to benchmark your own internal assets against an external team; or perhaps your own team of testers is not large enough to cover all the auditing requirements and thus you want to bring in outside help.

Finally, once the assessment plan is complete, we need to get it approved for execution. This doesn’t just involve our direct bosses but should also involve any stakeholders that might be affected (especially if something goes wrong and services are interrupted). The approval is not just for the plan itself, but also for any needed resources (e.g., funding for an external assessor) as well as for the scheduling. There is nothing worse than to schedule a security assessment that we later find out coincides with a big event like end-of-month accounting and reporting.

Validating an Assessment

Once the tests are over and the interpretation and prioritization are done, management will have in its hands a compilation of many of the ways the organization could be successfully attacked. This is the input to the next cycle in the remediation strategy.
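As a toy illustration of that remediation cycle, the following sketch triages assessment findings by risk reduced per dollar under a limited budget. Every finding, dollar figure, and the budget itself are invented for illustration; real prioritization involves far more than a single ratio.

```python
# Hypothetical findings: (name, annualized risk reduced in $, mitigation cost in $)
findings = [
    ("patch mail gateway", 90_000, 10_000),
    ("segment guest Wi-Fi", 40_000, 20_000),
    ("rewrite legacy portal", 120_000, 150_000),
    ("enable MFA", 80_000, 5_000),
]

def prioritize(findings, budget):
    """Greedy triage: fund mitigations with the best risk-reduced-per-dollar
    ratio until the budget runs out. A sketch, not a full risk model."""
    ranked = sorted(findings, key=lambda f: f[1] / f[2], reverse=True)
    chosen, spent = [], 0
    for name, risk, cost in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

selected, total_cost = prioritize(findings, budget=40_000)
```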
Every organization has only so much money, time, and personnel to commit to defending its network, and thus can mitigate only so much of the total risk. After balancing the risks and risk appetite of the organization and the costs of possible mitigations and the value gained from each, management must direct the system and security administrators as to where to spend those limited resources. An oversight program is required to ensure that the mitigations work as expected and that the estimated cost of each mitigation action is closely tracked by the actual cost of implementation. Any time the cost rises significantly or the value is found to be far below what was expected, the process should be briefly paused and reevaluated. It may be that a risk-versus-cost option initially considered less desirable now makes more sense than continuing with the chosen path.

Finally, when all is well and the mitigations are underway, everyone can breathe easier…except the security engineer who has the task of monitoring vulnerability announcements and discussion mailing lists, as well as the early warning services offered by some vendors. To put it another way, the risk environment keeps changing. Between tests, monitoring may make the organization aware of newly discovered vulnerabilities that would be found the next time the test is run but that are too high risk to allow to wait that long. And so another, smaller cycle of mitigation decisions and actions must be taken, and then it is time to run the tests again. Table 18-1 provides an example of a testing schedule that each operations and security department should develop and carry out.
Table 18-1 Example Testing Schedules for Each Operations and Security Department

- Network scanning. Frequency: continuously to quarterly. Benefits: enumerates the network structure and determines the set of active hosts and associated software; identifies unauthorized hosts connected to a network; identifies open ports; identifies unauthorized services.
- Log reviews. Frequency: daily for critical systems. Benefits: validates that the system is operating according to policy.
- Password cracking. Frequency: continuously to the same frequency as the expiration policy. Benefits: verifies the policy is effective in producing passwords that are difficult to break; verifies that users select passwords compliant with the organization’s security policy.
- Vulnerability scanning. Frequency: quarterly or bimonthly (more often for high-risk systems), or whenever the vulnerability database is updated. Benefits: enumerates the network structure and determines the set of active hosts and associated software; identifies a target set of computers to focus vulnerability analysis; identifies potential vulnerabilities on the target set; validates that operating systems and major applications are up to date with security patches and software versions.
- Penetration testing. Frequency: annually. Benefits: determines how vulnerable an organization’s network is to penetration and the level of damage that can be incurred; tests the IT staff’s response to perceived security incidents and their knowledge and implementation of the organization’s security policy and the system’s security requirements.
- Integrity checkers. Frequency: monthly and in case of a suspicious event. Benefits: detects unauthorized file modifications.

Testing Technical Controls

A technical control is a security control implemented through the use of an IT asset. This asset is usually, but not always, some sort of software or hardware that is configured in a particular way.
When we test our technical controls, we are verifying their ability to mitigate the risks that we identified in our risk management process (see Chapter 2 for a detailed discussion). This linkage between controls and the risks they are meant to mitigate is important because we need to understand the context in which specific controls were implemented. Once we understand what a technical control was intended to accomplish, we are able to select the proper means of testing whether it is effective. For instance, we may be better off testing third-party software for vulnerabilities than attempting a code review. As security professionals, we must be familiar, and ideally experienced, with the most common approaches to testing technical controls so that we are able to select the right one for the job at hand.

Vulnerability Testing

Vulnerability testing, whether manual, automated, or—preferably—a combination of both, requires staff and/or consultants with a deep security background and the highest level of trustworthiness. Even the best automated vulnerability scanning tool will produce output that can be misinterpreted as crying wolf (false positive) when there is only a small puppy in the room, or alert you to something that is indeed a vulnerability but that either does not matter to your environment or is adequately compensated for elsewhere. There may also be two individual vulnerabilities that exist, which by themselves are not very important but when put together are critical. And, of course, false negatives will also crop up, such as an obscure element of a single vulnerability that matters greatly to your environment but is not called out by the tool.

NOTE Before carrying out vulnerability testing, a written agreement from management is required! This protects the tester against prosecution for doing his job and ensures there are no misunderstandings by providing in writing what the tester should—and should not—do.
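One way to keep a scanner from crying wolf, as cautioned above, is to triage its raw output against a list of documented, justified exceptions rather than silently dropping anything. A minimal sketch with invented findings and justifications:

```python
# Hypothetical raw scanner output: (host, finding id, severity)
raw_findings = [
    ("10.0.0.5", "CVE-2021-0001", "high"),
    ("10.0.0.5", "weak-cipher", "medium"),
    ("10.0.0.9", "CVE-2021-0001", "high"),
]

# Findings accepted as not applicable or compensated for elsewhere,
# each with a documented justification (assumed data).
exceptions = {
    ("10.0.0.5", "weak-cipher"): "TLS terminated at compensating proxy",
}

def triage(findings, exceptions):
    """Separate actionable findings from documented exceptions so the report
    neither cries wolf nor quietly discards anything."""
    actionable, accepted = [], []
    for host, fid, sev in findings:
        if (host, fid) in exceptions:
            accepted.append((host, fid, exceptions[(host, fid)]))
        else:
            actionable.append((host, fid, sev))
    return actionable, accepted

actionable, accepted = triage(raw_findings, exceptions)
```

Keeping the accepted list in the output (with its justifications) preserves an audit trail for the next assessment cycle.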
The goals of the assessment are to

- Evaluate the true security posture of an environment (don’t cry wolf, as discussed earlier).
- Identify as many vulnerabilities as possible, with honest evaluations and prioritizations of each.
- Test how systems react to certain circumstances and attacks, to learn not only what the known vulnerabilities are (such as this version of the database, that version of the operating system, or a user ID with no password set) but also how the unique elements of the environment might be abused (SQL injection attacks, buffer overflows, and process design flaws that facilitate social engineering).

Before the scope of the test is decided and agreed upon, the tester must explain the testing ramifications. Vulnerable systems could be knocked offline by some of the tests, and production could be negatively affected by the loads the tests place on the systems. Management must understand that results from the test are just a “snapshot in time.” As the environment changes, new vulnerabilities can arise. Management should also understand that various types of assessments are possible, each one able to expose different kinds of vulnerabilities in the environment, and each one limited in the completeness of results it can offer:

Personnel testing includes reviewing employee tasks and thus identifying vulnerabilities in the standard practices and procedures that employees are instructed to follow, demonstrating social engineering attacks and the value of training users to detect and resist such attacks, and reviewing employee policies and procedures to ensure those security risks that cannot be reduced through physical and logical controls are met with the final control category: administrative.

Physical testing includes reviewing facility and perimeter protection mechanisms. For instance, do the doors actually close automatically, and does an alarm sound if a door is held open too long?
Are the interior protection mechanisms of server rooms, wiring closets, sensitive systems, and assets appropriate? (For example, is the badge reader working, and does it really limit access to only authorized personnel?) Is dumpster diving a threat? (In other words, is sensitive information being discarded without proper destruction?) And what about protection mechanisms for manmade, natural, or technical threats? Is there a fire suppression system? Does it work, and is it safe for the people and the equipment in the building? Are sensitive electronics kept above raised floors so they survive a minor flood? And so on.

System and network testing are perhaps what most people think of when discussing information security vulnerability testing. For efficiency, an automated scanning product identifies known system vulnerabilities, and some may (if management has signed off on the performance impact and the risk of disruption) attempt to exploit vulnerabilities.

Because a security assessment is a point-in-time snapshot of the state of an environment, assessments should be performed regularly. Lower-priority, better-protected, and less-at-risk parts of the environment may be scanned once or twice a year. High-priority, more vulnerable targets, such as e-commerce web server complexes and the middleware just behind them, should be scanned nearly continuously.

To the degree automated tools are used, more than one tool—or a different tool on consecutive tests—should be used. No single tool knows or finds every known vulnerability. The vendors of different scanning tools update their tools’ vulnerability databases at different rates, and may add particular vulnerabilities in different orders. Always update the vulnerability database of each tool just before the tool is used. Similarly, from time to time different experts should run the test and/or interpret the results. No single expert always sees everything there is to be seen in the results.
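The advice to run more than one tool can be made concrete by merging and comparing their outputs; findings reported by only one tool deserve extra scrutiny either way. A sketch with hypothetical scan results:

```python
# Hypothetical outputs from two scanners run against the same hosts.
tool_a = {("10.0.0.5", "CVE-2023-1111"), ("10.0.0.5", "CVE-2023-2222")}
tool_b = {("10.0.0.5", "CVE-2023-2222"), ("10.0.0.9", "CVE-2023-3333")}

def merge_results(*tool_outputs):
    """Union the findings and note which appear in only one tool's output,
    illustrating why no single scanner should be trusted alone."""
    union = set().union(*tool_outputs)
    common = set.intersection(*map(set, tool_outputs))
    single_source = union - common
    return union, single_source

all_findings, single_source = merge_results(tool_a, tool_b)
```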
Most networks consist of many heterogeneous devices, each of which will likely have its own set of potential vulnerabilities, as shown in Figure 18-1. The potential issues we would seek in, say, the perimeter router (“1.” in Figure 18-1) are very different than those in a wireless access point (WAP) (“7.” in Figure 18-1) or a back-end database management server (DBMS) (“11.” in Figure 18-1). Vulnerabilities in each of these devices, in turn, will depend on the specific hardware, software, and configurations in use. Even if you were able to find an individual or tool who had expert knowledge on the myriad of devices and device-specific security issues, that person or tool would come with its own inherent biases. It is best to leverage team/tool heterogeneity in order to improve the odds of covering blind spots.

Other Vulnerability Types

As noted earlier, vulnerability scans find the potential vulnerabilities. Penetration testing is required to identify those vulnerabilities that can actually be exploited in the environment and cause damage. Commonly exploited vulnerabilities include the following:

Kernel flaws These are problems that occur below the level of the user interface, deep inside the operating system. Any flaw in the kernel that can be reached by an attacker, if exploitable, gives the attacker the most powerful level of control over the system.

Countermeasure: Ensure that security patches to operating systems—after sufficient testing—are promptly deployed in the environment to keep the window of vulnerability as small as possible.

Buffer overflows Poor programming practices, or sometimes bugs in libraries, allow more input than the program has allocated space to store it. This overwrites data or program memory after the end of the allocated buffer, and sometimes allows the attacker to inject program code and then cause the processor to execute it.
This gives the attacker the same level of access as that held by the program that was attacked. If the program was run as an administrative user or by the system itself, this can mean complete access to the system.

Countermeasure: Good programming practices and developer education, automated source code scanners, enhanced programming libraries, and strongly typed languages that disallow buffer overflows are all ways of reducing this extremely common vulnerability.

Symbolic links Though the attacker may be properly blocked from seeing or changing the content of sensitive system files and data, if a program follows a symbolic link (a stub file that redirects the access to another place) and the attacker can compromise the symbolic link, then the attacker may be able to gain unauthorized access. (Symbolic links are used in Unix and Linux systems.) This may allow the attacker to damage important data and/or gain privileged access to the system. A historical example of this was to use a symbolic link to cause a program to delete a password database, or replace a line in the password database with characters that, in essence, created a password-less root-equivalent account.

Countermeasure: Programs, and especially scripts, must be written to ensure that the full path to the file cannot be circumvented.

[Figure 18-1 Vulnerabilities in heterogeneous networks. The diagram annotates eleven numbered network components with typical weaknesses, such as unencrypted protocols and management protocols, unpatched and vulnerable services and operating systems, default installations and default SNMP read/write community strings, unsegmented networks, missing access lists, excessive firewall rules (inbound/outbound), users running as local admin without personal firewalls, weak certificate and session management, default and easily guessed passwords, cleartext passwords, excessive directory and file permissions, application input not effectively validated, and ineffective logging and monitoring.]

File descriptor attacks File descriptors are numbers many operating systems use to represent open files in a process. Certain file descriptor numbers are universal, meaning the same thing to all programs. If a program makes unsafe use of a file descriptor, an attacker may be able to cause unexpected input to be provided to the program, or cause output to go to an unexpected place with the privileges of the executing program.

Countermeasure: Good programming practices and developer education, automated source code scanners, and application security testing are all ways of reducing this type of vulnerability.

Race conditions Race conditions exist when the design of a program puts it in a vulnerable condition before ensuring that those vulnerable conditions are mitigated. Examples include opening temporary files without first ensuring the files cannot be read or written to by unauthorized users or processes, and running in privileged mode or instantiating dynamic link library (DLL) functions without first verifying that the DLL path is secure.
Either of these may allow an attacker to cause the program (with its elevated privileges) to read or write unexpected data or to perform unauthorized commands. An example of a race condition is a time-of-check to time-of-use (TOC/TOU) attack, discussed in Chapter 2.

Countermeasure: Good programming practices and developer education, automated source code scanners, and application security testing are all ways of reducing this type of vulnerability.

File and directory permissions Many of the previously described attacks rely on inappropriate file or directory permissions—that is, an error in the access control of some part of the system, on which a more secure part of the system depends. Also, if a system administrator makes a mistake that results in decreasing the security of the permissions on a critical file, such as making a password database accessible to regular users, an attacker can take advantage of this to add an unauthorized user to the password database or an untrusted directory to the DLL search path.

Countermeasure: File integrity checkers, which should also check expected file and directory permissions, can detect such problems in a timely fashion, hopefully before an attacker notices and exploits them.

Many, many types of vulnerabilities exist, and we have covered some, but certainly not all, here in this book. The previous list includes only a few specific vulnerabilities you should be aware of for exam purposes.
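The race condition entry above lends itself to a short demonstration. This Python sketch contrasts the vulnerable check-then-use pattern with a version that performs a single open() and handles failure, shrinking the race window; the temporary file is only scaffolding for the example.

```python
import os
import tempfile

def unsafe_read(path):
    """Vulnerable pattern: the check and the use are separate steps, so an
    attacker can swap the file (e.g., via a symbolic link) in between."""
    if os.access(path, os.R_OK):   # time of check
        with open(path) as f:      # time of use: race window sits between these
            return f.read()
    return None

def safer_read(path):
    """Mitigation sketch: no separate check; attempt the single open() and
    handle failure, leaving no check-to-use gap to exploit."""
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return None

# Demonstration scaffolding: a throwaway file to read back.
with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
    tmp.write("secret")
    name = tmp.name
content = safer_read(name)
os.unlink(name)
```

Dropping privileges before the open, or using file-descriptor-relative calls such as os.open with O_NOFOLLOW where available, narrows the exposure further.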
Vulnerability Scanning Recap

Vulnerability scanners provide the following capabilities:

- Identification of active hosts on the network
- Identification of active and vulnerable services (ports) on hosts
- Identification of operating systems
- Identification of vulnerabilities associated with discovered operating systems and applications
- Identification of misconfigured settings
- Testing for compliance with host applications’ usage/security policies
- Establishment of a foundation for penetration testing

Penetration Testing

Penetration testing (also known as pen testing) is the process of simulating attacks on a network and its systems at the request of the owner or senior management. Penetration testing uses a set of procedures and tools designed to test and possibly bypass the security controls of a system. Its goal is to measure an organization’s level of resistance to an attack and to uncover any exploitable weaknesses within the environment. Organizations need to determine the effectiveness of their security measures and not just trust the promises of the security vendors. Good computer security is based on reality, not on some lofty goals of how things are supposed to work.

A penetration test emulates the same methods attackers would use. Attackers can be clever, creative, and resourceful in their techniques, so penetration test attacks should align with the newest hacking techniques along with strong foundational testing methods. The test should look at each and every computer in the environment, as shown in Figure 18-2, because an attacker will not necessarily scan one or two computers only and call it a day.

The type of penetration test that should be used depends on the organization, its security objectives, and the management’s goals. Some organizations perform periodic penetration tests on themselves using different types of tools.
Other organizations ask a third party to perform the vulnerability and penetration tests to provide a more objective view. Penetration tests can evaluate web servers, Domain Name System (DNS) servers, router configurations, workstation vulnerabilities, access to sensitive information, open ports, and available services’ properties that a real attacker might use to compromise the organization’s overall security. Some tests can be quite intrusive and disruptive. The timeframe for the tests should be agreed upon so productivity is not affected and personnel can bring systems back online if necessary.

[Figure 18-2 Penetration testing is used to prove an attacker can actually compromise systems. The diagram shows a penetration testing host probing through routers and firewalls to reach web servers, a server farm, and database server clusters on the internal network.]

NOTE Penetration tests are not necessarily restricted to information technology, but may include physical security as well as personnel security. Ultimately, the purpose is to compromise one or more controls, which could be technical, physical, or administrative.

The result of a penetration test is a report given to management that describes the vulnerabilities identified and the severity of those vulnerabilities, along with descriptions of how they were exploited by the testers. The report should also include suggestions on how to deal with the vulnerabilities properly. From there, it is up to management to determine how to address the vulnerabilities and what countermeasures to implement.

It is critical that senior management be aware of any risks involved in performing a penetration test before it gives the authorization for one. In rare instances, a system or application may be taken down inadvertently using the tools and techniques employed during the test.
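The management report described above is usually ordered worst-first. A minimal sketch that sorts invented findings by whether they were actually exploited and then by severity:

```python
# Hypothetical findings for the management report.
findings = [
    {"vuln": "SQL injection in login form", "severity": 9.1,
     "exploited": True, "remediation": "parameterize queries"},
    {"vuln": "verbose server banner", "severity": 2.0,
     "exploited": False, "remediation": "suppress version header"},
    {"vuln": "default admin password", "severity": 8.5,
     "exploited": True, "remediation": "enforce password policy"},
]

def build_report(findings):
    """Order findings worst-first so management sees the highest-severity,
    actually-exploited issues at the top, each with a suggested fix."""
    ordered = sorted(findings,
                     key=lambda f: (f["exploited"], f["severity"]),
                     reverse=True)
    lines = [
        f"{f['severity']:>4}  "
        f"{'EXPLOITED' if f['exploited'] else 'observed '}  "
        f"{f['vuln']} -> {f['remediation']}"
        for f in ordered
    ]
    return "\n".join(lines)

report = build_report(findings)
```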
As expected, the goal of penetration testing is to identify vulnerabilities, estimate the true protection the security mechanisms within the environment are providing, and see how suspicious activity is reported—but accidents can and do happen. Security professionals should obtain an authorization letter that includes the extent of the testing authorized, and this letter or memo should be available to members of the team during the testing activity. This type of letter is commonly referred to as a “Get Out of Jail Free Card.” Contact information for key personnel should also be available, along with a call tree in the event something does not go as planned and a system must be recovered.

NOTE A “Get Out of Jail Free Card” is a document you can present to someone who thinks you are up to something malicious, when in fact you are carrying out an approved test. More than that, it’s also the legal agreement you have between you and your customer that protects you from liability and prosecution.

When performing a penetration test, the team goes through a five-step process:

1. Discovery: Footprinting and gathering information about the target
2. Enumeration: Performing port scans and resource identification methods
3. Vulnerability mapping: Identifying vulnerabilities in identified systems and resources
4. Exploitation: Attempting to gain unauthorized access by exploiting vulnerabilities
5. Report to management: Delivering to management documentation of test findings along with suggested countermeasures

[Diagram: the accompanying flowchart depicts this workflow. Inputs such as DNS, WHOIS, Google searches, client inputs, and perimeter devices feed identification of targets; port scanning and system fingerprinting lead to identification of vulnerabilities across operating systems, services, and web applications; if the mandate allows exploitation, nondestructive exploitation of all possible vulnerabilities and deeper network penetration follow; the process ends with result collation and report writing.]

The penetration testing team can have varying degrees of knowledge about the penetration target before the tests are actually carried out:

- Zero knowledge: The team does not have any knowledge of the target and must start from ground zero.
- Partial knowledge: The team has some information about the target.
- Full knowledge: The team has intimate knowledge of the target.

Security testing of an environment may take several forms, in the sense of the degree of knowledge the tester is permitted to have up front about the environment, and also the degree of knowledge the environment is permitted to have up front about the tester. Tests can be conducted externally (from a remote location) or internally (meaning the tester is within the network). Combining both can help better understand the full scope of threats from either domain (internal and external).

Vulnerability and Penetration Testing: What Color Is Your Box?

Vulnerability testing and penetration testing come in boxes of at least three colors: black, white, and gray. The color, of course, is metaphorical, but security professionals need to be aware of the three types. None is clearly superior to the others in all situations, so it is up to us to choose the right approach for our purposes.

Black box testing treats the system being tested as completely opaque. This means that the tester has no a priori knowledge of the internal design or features of the system. All knowledge will come to the tester only through the assessment itself. This approach simulates an external attacker best and may yield insights into information leaks that can give an adversary better information on attack vectors. The disadvantage of black box testing is that it probably won’t cover all of the internal controls, since some of them are unlikely to be discovered in the course of the audit. Another issue is that, with no knowledge of the innards of the system, the test team may inadvertently target a subsystem that is critical to daily operations.

White box testing affords the pen tester complete knowledge of the inner workings of the system even before the first scan is performed. This approach allows the test team to target specific internal controls and features and should yield a more complete assessment of the system. The downside is that white box testing may not be representative of the behaviors of an external attacker, though it may be a more accurate depiction of an insider threat.

Gray box testing falls somewhere between the other two approaches. Some, but not all, information on the internal workings is provided to the test team. This helps guide their tactics toward areas we want to have thoroughly tested, while also allowing for a degree of realism in terms of discovering other features of the system. This approach mitigates the issues with both white and black box testing.

Penetration tests may be blind, double-blind, or targeted. A blind test is one in which the testers only have publicly available data to work with (which is also known as zero knowledge or black box testing). The network security staff is aware that this type of test will take place, and will be on the lookout for pen tester activities. Part of the planning for this type of test involves determining what actions, if any, the defenders are allowed to take. Stopping every detected attack will slow down the pen testing team and may not show the depths they could’ve reached without forewarning to the staff.

A double-blind test (stealth assessment) is also a blind test to the assessors, as mentioned previously, but in this case the network security staff is not notified.
This enables the test to evaluate the network's security level and the staff's responses, log monitoring, and escalation processes, and is a more realistic demonstration of the likely success or failure of an attack.

Targeted tests can involve external consultants and internal staff carrying out focused tests on specific areas of interest. For example, before a new application is rolled out, the team might test it for vulnerabilities before installing it into production. Another example is to focus specifically on systems that carry out e-commerce transactions and not the other daily activities of the organization.

Chapter 18: Security Assessments 827

Vulnerability Test vs. Penetration Test

A vulnerability assessment identifies a wide range of vulnerabilities in the environment. This is commonly carried out through a scanning tool. The idea is to identify any vulnerabilities that potentially could be used to compromise the security of our systems. By contrast, in a penetration test, the security professional exploits one or more vulnerabilities to prove to the customer (or your boss) that a hacker can actually gain access to the organization's resources. It is important that the team start off with only basic user-level access to properly simulate different attacks. The team needs to utilize a variety of different tools and attack methods and look at all possible vulnerabilities, because actual attackers only need to find one vulnerability that the defenders missed.

Red Teaming

While penetration testing is intended to uncover as many exploitable vulnerabilities as possible in the allotted time, it doesn't really emulate threat actor behaviors all that well. Adversaries almost always have very specific objectives when they attack, so just because your organization passed its pen test doesn't mean there isn't a creative way for adversaries to get in.
Red teaming is the practice of emulating a specific threat actor (or type of threat actor) with a particular set of objectives. Whereas pen testing answers the question "How many ways can I get in?" red teaming answers the question "How can I get in and accomplish this objective?" Red teaming more closely mimics the operational planning and attack processes of advanced threat actors that have the time, means, and motive to defeat even advanced defenses and remain undetected for long periods of time.

A red team operation begins by determining the adversary to be emulated and a set of objectives. The red team then conducts reconnaissance (typically with a bit of insider help) to understand how the systems work and locate the team's objectives. The next step is to draw up a plan on how to accomplish the objectives while remaining undetected. In the more elaborate cases, the red team may create a replica of the target environment inside a cyber range in which to perform mission rehearsals and ensure the team's actions will not trigger any alerts. Finally, the red team launches the attack on the actual system and tries to reach its objectives.

As you can imagine, red teaming as described here is very costly and beyond the reach of all but the most well-resourced organizations. Most often, what you'll see is a hybrid approach that is more focused than pen testing but less intense than red teaming. We all have to do what we can with the resources we have. This is why many organizations (even very small ones) establish an internal red team that periodically comes together to think like an adversary would about some aspect of the business. It could be a new or critical information system, or a business process, or even a marketing campaign.
Their job is to ask the question "If we were adversaries, how would we exploit this aspect of the business?"

EXAM TIP  Don't worry about differentiating penetration testing and red teaming during the exam. If the term "red team" shows up on your test, it will most likely describe the group of people who conduct both penetration tests and red teaming.

Breach and Attack Simulations

One of the problems with both pen testing and red teaming is that they amount to a point-in-time snapshot of the organizational defenses. Just because your organization did very well against the penetration testers last week doesn't necessarily mean it would do well against a threat actor today. There are many reasons for this, but the two most important ones are that things change and that the test (no matter how long it takes) is never 100 percent thorough. What would be helpful as a complement to human testing would be automated testing that could happen periodically or even continually.

Breach and attack simulations (BAS) are automated systems that launch simulated attacks against a target environment and then generate reports on their findings. They are meant to be realistic but not cause any adverse effect to the target systems. For example, a ransomware simulation might use "defanged" malware that looks and propagates just like the real thing but, when successful, will only encrypt a sample file on a target host as a proof of concept. Its signature should be picked up by your network detection and response (NDR) or your endpoint detection and response (EDR) solutions. Its communications with a command-and-control (C2) system via the Internet will follow the same processes that the real thing would. In other words, each simulation is very realistic and meant to test your ability to detect and respond to it.

BAS is typically offered as a Software as a Service (SaaS) solution. All the tools, automation, and reporting take place in the provider's cloud.
BAS agents can also be deployed in the target environment for better coverage under an assumed breach scenario. This covers the cases in which the adversary breached your environment using a zero-day exploit or some other mechanism that successfully evaded your defenses. In that case, you're trying to determine how well your defense in depth is working.

Log Reviews

A log review is the examination of system log files to detect security events or to verify the effectiveness of security controls. Log reviews actually start way before the first event is examined by a security specialist. In order for event logs to provide meaningful information, they must capture a very specific but potentially large amount of information that is grounded in both industry best practices and the organization's risk management process. There is no one-size-fits-all set of event types that will help you assess your security posture. Instead, you need to constantly tune your systems in response to the ever-changing threat landscape.

Another critical element when setting up effective log reviews for an organization is to ensure that time is standardized across all networked devices. If an incident affects three devices and their internal clocks are off by even a few seconds, it will be significantly more difficult to determine the sequence of events and understand the overall flow of the attack. Although it is possible to normalize differing timestamps, it is an extra step that adds complexity to an already challenging process of understanding an adversary's behavior on our networks. Standardizing and synchronizing time is not a difficult thing to do. The Network Time Protocol (NTP) version 4, described in RFC 5905, is the industry standard for synchronizing computer clocks between networked devices.
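To make the protocol a bit more concrete, the following is a minimal sketch of what an NTPv4 client exchange involves at the wire level, based on the packet layout in RFC 5905 (48-byte header, version and mode bits in the first byte, transmit timestamp in bytes 40 through 47). It is an illustration, not a replacement for a real NTP client.

```python
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)
NTP_EPOCH_OFFSET = 2208988800

def build_client_request():
    """Build a minimal 48-byte NTPv4 client packet.
    First byte packs LI=0 (no leap warning), VN=4 (version), Mode=3 (client)."""
    first_byte = (0 << 6) | (4 << 3) | 3
    return struct.pack("!B", first_byte) + b"\x00" * 47

def parse_transmit_time(packet):
    """Extract the server's 64-bit transmit timestamp (bytes 40-47: 32-bit
    seconds plus 32-bit fraction) and convert it to a Unix timestamp."""
    seconds, fraction = struct.unpack("!II", packet[40:48])
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

# To query a real server you would send the request over UDP port 123, e.g.:
#   import socket, time
#   s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   s.sendto(build_client_request(), ("pool.ntp.org", 123))
#   response, _ = s.recvfrom(48)
#   print(time.ctime(parse_transmit_time(response)))
```

Real NTP clients go well beyond this, using the four timestamps in the packet to estimate round-trip delay and clock offset, as described in the sidebar that follows.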
Now that you have carefully defined the events you want to track and ensured all timestamps are synchronized across your network, you still need to determine where the events will be stored. By default, most log files are stored locally on the corresponding device. The challenge with this approach is that it makes it more difficult to correlate events across devices to a given incident. Additionally, it makes it easier for attackers to alter the log files of whatever devices they compromise. By centralizing the location of all log files across the organization, you address both issues and also make it easier to archive the logs for long-term retention.

Efficient archiving is important because the size of these logs will likely be significant. In fact, unless your organization is extremely small, you will likely have to deal with thousands (or perhaps even millions) of events each day. Most of these are mundane and probably irrelevant, but we usually don't know which events are important and which

Network Time Protocol

The Network Time Protocol (NTP) is one of the oldest protocols used on the Internet and is still in widespread use today. It was originally developed in the 1980s in part to solve the problem of synchronizing trans-Atlantic network communications. Its current version, 4, still leverages statistical analysis of round-trip delays between a client and one or more time servers. The time itself is sent on port 123 in a UDP datagram that carries a 64-bit timestamp. Despite its client/server architecture, NTP employs a hierarchy of time sources organized into strata, with stratum 0 being the most authoritative. A device on a higher-numbered stratum acts as a client to a server on a lower-numbered (more authoritative) stratum, but could itself be a server to a node further downstream from it. Furthermore, nodes on the same stratum can and often do communicate with each other to improve the accuracy of their times.
Stratum 0 consists of highly accurate time sources such as atomic clocks, global positioning system (GPS) clocks, or radio clocks. Stratum 1 consists of primary time sources, typically network appliances with highly accurate internal clocks that are connected directly to a stratum 0 source. Stratum 2 is where you would normally see your network servers, such as your local NTP servers and your domain controllers. Stratum 3 can be thought of as other servers and the client computers on your network, although the NTP standard does not define this stratum as such. Instead, the standard allows for a hierarchy of up to 16 strata; the only requirement is that each stratum gets its time from the stratum above it and serves time to the strata below it, if it has any.

[Figure: the NTP stratum hierarchy, with stratum 0 reference sources (government GPS, radio waves, a cesium fountain standard) feeding stratum 1, which in turn serves strata 2 and 3]

aren't until we've done some analysis. In many investigations, the seemingly unimportant events of days, weeks, or even months ago turn out to be the keys to understanding a security incident. So while retaining as much as possible is necessary, we need a way to quickly separate the wheat from the chaff.

Preventing Log Tampering

Log files are often among the first artifacts that attackers will use to attempt to hide their actions. Knowing this, it is up to us as security professionals to do what we can to make it infeasible, or at least very difficult, for attackers to successfully tamper with our log files. The following are the top five steps we can take to raise the bar for the bad folks:

Remote logging  When attackers compromise a device, they often gain sufficient privileges to modify or erase the log files on that device. Putting the log files on a separate box requires the attackers to target that box too, which at the very least buys you some time to notice the intrusion.
Simplex communication  Some high-security environments use one-way (or simplex) communications between the reporting devices and the central log repository. This is easily accomplished by severing the "receive" pairs on an Ethernet cable. The term data diode is sometimes used to refer to this approach to physically ensuring a one-way path.

Replication  It is never a good idea to keep a single copy of such an important resource as the consolidated log entries. By making multiple copies and keeping them in different locations, you make it harder for attackers to alter the log files, particularly if at least one of the locations is not accessible from the network (e.g., a removable device).

Write-once media  If one of the locations to which you back up your log files can be written to only once, you make it impossible for attackers to tamper with that copy of the data. Of course, they can still try to physically steal the media, but now you force them to move into the physical domain, which many attackers (particularly ones overseas) will not do.

Cryptographic hash chaining  A powerful technique for ensuring that modified or deleted events are easily noticed is cryptographic hash chaining. In this technique, the cryptographic hash (e.g., SHA-256) of the preceding event is appended to each event. This creates a chain that can attest to the completeness and the integrity of every event in it.

Fortunately, many solutions, both commercial and free, now exist for analyzing and managing log files and other important event artifacts. Security information and event management (SIEM) systems enable the centralization, correlation, analysis, and retention of event data in order to generate automated alerts. Typically, a SIEM provides a dashboard interface that highlights possible security incidents. It is then up to the security specialists to investigate each alert and determine if further action is required.
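The hash chaining technique described above can be sketched in a few lines of Python. This is a minimal illustration, with invented event dictionaries and field names, not a production logging system:

```python
import hashlib
import json

def _entry_hash(event, prev_hash):
    """Hash the event together with the previous entry's hash (SHA-256)."""
    payload = json.dumps({"event": event, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_event(log, event):
    """Append an event, chaining it to the hash of the preceding entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    log.append({"event": event,
                "prev_hash": prev_hash,
                "hash": _entry_hash(event, prev_hash)})

def verify_chain(log):
    """Recompute every hash; any modified or deleted entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["hash"] != _entry_hash(entry["event"], prev_hash):
            return False
        prev_hash = entry["hash"]
    return True
```

Because each entry's hash covers the previous entry's hash, altering or removing any event invalidates every hash downstream of it, which is what makes tampering evident.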
The challenge, of course, is ensuring that the number of false positives is kept fairly low and that the number of false negatives is kept even lower.

Synthetic Transactions

Many of our information systems operate on the basis of transactions. A user (typically a person) initiates a transaction that could be anything from a request for a given web page to a wire transfer of half a million dollars to an account in Switzerland. This transaction is processed by any number of other servers and results in whatever action the requestor wanted. This is considered a real transaction. Now suppose that a transaction is not generated by a person but by a script. This is considered a synthetic transaction.

The usefulness of synthetic transactions is that they allow us to systematically test the behavior and performance of critical services. Perhaps the simplest example is a scenario in which you want to ensure that your home page is up and running. Rather than waiting for an angry customer to send you an e-mail saying that your home page is unreachable, or spending a good chunk of your day visiting the page on your browser, you could write a script that periodically visits your home page and ensures that a certain string is returned. This script could then alert you as soon as the page is down or unreachable, allowing you to investigate before you would've otherwise noticed it. This could be an early indicator that your web server was hacked or that you are under a distributed denial-of-service (DDoS) attack.

Synthetic transactions can do more than simply tell you whether a service is up or down. They can measure performance parameters such as response time, which could alert you to network congestion or server overutilization. They can also help you test new services by mimicking typical end-user behaviors to ensure the system works as it ought to.
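The home-page check described above might be sketched like this in Python. The URL and expected string are placeholders, and send_alert stands in for whatever notification hook your monitoring setup actually uses:

```python
import urllib.request

def check_page(url, expected_string, timeout=10):
    """Synthetic transaction: fetch a page and confirm a known string appears.
    Returns (ok, detail) so a scheduler or alerting hook can act on the result."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except Exception as exc:
        return False, f"unreachable: {exc}"
    if expected_string not in body:
        return False, "page loaded but expected content is missing"
    return True, "ok"

# Run periodically from cron, a systemd timer, or a monitoring host, e.g.:
#   ok, detail = check_page("https://www.example.com/", "Welcome")
#   if not ok:
#       send_alert(detail)  # send_alert is a hypothetical notification hook
```

A fuller version of this script would also record response time for each check, which covers the performance-monitoring use mentioned above.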
Finally, these transactions can be written to behave as malicious users by, for example, attempting a cross-site scripting (XSS) attack and ensuring your controls are effective. This is an effective way of testing software from the outside.

Real User Monitoring vs. Synthetic Transactions

Real user monitoring (RUM) is a passive way to monitor the interactions of real users with a web application or system. It uses agents to capture metrics such as delay, jitter, and errors from the user's perspective. RUM differs from synthetic transactions in that it uses real people instead of scripted commands.

While RUM more accurately captures the actual user experience, it tends to produce noisy data (e.g., incomplete transactions due to users changing their minds or losing mobile connectivity) and thus may require more back-end analysis. It also lacks the elements of predictability and regularity, which could mean that a problem won't be detected during low-utilization periods. Synthetic transactions, on the other hand, are very predictable and can be very regular, because their behaviors are scripted. They can also detect rare occurrences more reliably than waiting for a user to actually trigger that behavior. Synthetic transactions also have the advantage of not having to wait for a user to become dissatisfied or encounter a problem, which makes them a more proactive approach.

It is important to note that RUM and synthetic transactions are different ways of achieving the same goal. Neither approach is the better one in all cases, so it is common to see both employed contemporaneously.

Code Reviews

So far, all the security testing we have discussed looks at the behaviors of our systems. This means that we are only assessing the externally visible features without visibility into the inner workings of the system.
If you want to test your own software system from the inside, you could use a code review, a systematic examination of the instructions that comprise a piece of software, performed by someone other than the author of that code. This approach is a hallmark of mature software development processes. In fact, in many organizations, developers are not allowed to push out their software modules until someone else has signed off on them after doing a code review. Think of this as proofreading an important document before you send it to an important person. If you try to proofread it yourself, you will probably not catch all those embarrassing typos and grammatical errors as easily as someone else could who is checking it for you.

Code reviews go way beyond checking for typos, though that is certainly one element of it. It all starts with a set of coding standards developed by the organization that wrote the software. This could be an internal team, an outsourced developer, or a commercial vendor. Obviously, code reviews of commercial off-the-shelf (COTS) software are extremely rare unless the software is open source or you happen to be a major government agency. Still, each development shop will have a style guide or a set of documented coding standards that covers everything from how to indent the code to when and how to use existing code libraries. So a preliminary step to the code review is to ensure the author followed the team's style guide or standards. In addition to helping the maintainability of the software, this step gives the code reviewer a preview of the magnitude of the work ahead; a sloppy coder will probably have a lot of other, harder-to-find defects in his code.

After checking the structure and format of the code, the reviewer looks for uncalled or unneeded functions or procedures. These lead to "code bloat," which makes it harder to maintain and secure the application.
For this same reason, the reviewer looks for modules that are excessively complex and should be restructured or split into multiple routines. Finally, in terms of reducing complexity, the reviewer looks for blocks of repeated code that could be refactored. Even better, these could be pulled out and turned into external reusable components such as library functions.

An extreme example of unnecessary (and dangerous) procedures are the code stubs and test routines that developers often include in their developmental software. There have been too many cases in which developers left test code (sometimes including hard-coded credentials) in final versions of software. Once adversaries discover this condition, exploiting the software and bypassing security controls is trivial. This problem is insidious, because developers sometimes comment out the code for final testing, just in case the tests fail and they have to come back and rework it. They may make a mental note to revisit the file and delete this dangerous code, but then forget to do so. While commented code is unavailable to an attacker after a program is compiled (unless they have access to the source code), the same is not true of the scripts that are often found in distributed applications.

Defensive programming is a best practice that all software development operations should adopt. In a nutshell, it means that as you develop or review the code, you are constantly looking for opportunities for things to go badly. Perhaps the best example of defensive programming is the practice of treating all inputs, whether they come from a keyboard, a file,

A Code Review Process

1. Identify the code to be reviewed (usually a specific function or file).
2. The team leader organizes the inspection and makes sure everyone has access to the correct version of the source code, along with all supporting artifacts.
3.
Everyone on the team prepares for inspection by reading through the code and making notes.
4. A designated team member collates all the obvious errors offline (not in a meeting) so they don't have to be discussed during the inspection meeting (which would be a waste of time).
5. If everyone agrees the code is ready for inspection, then the meeting goes ahead.
6. The team leader displays the code (with line numbers) via an overhead projector so everyone can read through it. Everyone discusses bugs, design issues, and anything else that comes up about the code. A scribe (not the author of the code) writes everything down.
7. At the end of the meeting, everyone agrees on a "disposition" for the code:
   Passed: Code is good to go.
   Passed with rework: Code is good so long as small changes are fixed.
   Reinspect: Fix problems and have another inspection.
8. After the meeting, the author fixes any mistakes and checks in the new version.
9. If the disposition of the code in step 7 was passed with rework, the team leader checks off the bugs that the scribe wrote down and makes sure they're all fixed.
10. If the disposition of the code in step 7 was reinspect, the team leader goes back to step 2 and starts over again.

or the network, as untrusted until proven otherwise. This user input validation can be a bit trickier than it sounds, because you must understand the context surrounding the input. Are you expecting a numerical value? If so, what is the acceptable range for that value? Can this range change over time? These and many other questions need to be answered before we can decide whether the inputs are valid. Keep in mind that many of the oft-exploited vulnerabilities we see have a lack of input validation as their root cause.

Code Testing

We will discuss in Chapters 24 and 25 the multiple types of tests to which we must subject our code as part of the software development process.
However, once the code comes out of development and before we put it into a production environment, we must ensure that it meets our security policies. Does it encrypt all data in transit? Is it possible to bypass authentication or authorization controls? Does it store sensitive data in unencrypted temporary files? Does it reach out to any undocumented external resources (e.g., for library updates)? The list goes on, but the point is that security personnel are incentivized differently than software developers. The programmer gets paid to implement features in software, while the security practitioner gets paid to keep systems secure. Most mature organizations have an established process to certify that software systems are secure enough to be installed and operated on their networks. There is typically a follow-on to that process, which is when a senior manager (hopefully after reading the results of the certification) authorizes (or accredits) the system.

Misuse Case Testing

Use cases are structured scenarios that are commonly used to describe required functionality in an information system. Think of them as stories in which an external actor (e.g., a user) wants to accomplish a given goal on the system. The use case describes the sequence of interactions between the actor and the system that result in the desired outcome. Use cases are textual but are often summarized and graphically depicted using a Unified Modeling Language (UML) use case diagram such as the one shown in Figure 18-3. This figure illustrates a very simple view of a system in which a customer places online orders. According to the UML, actors such as our user are depicted using stick figures, and the actors' use cases are depicted as verb phrases inside ovals. Use cases can be related to one another in a variety of ways, which we call associations.
The most common ways in which use cases are associated are by including another use case (that is, the included use case is always executed when the preceding one is) or by extending a use case (meaning that the second use case may or may not be executed depending on a decision point in the main use case). In Figure 18-3, our customer attempts to place an order and may be prompted to log in if she hasn't already done so, but she will always be asked to provide her credit card information.

[Figure 18-3: A UML use case diagram with <<include>> and <<extend>> associations linking use cases such as Log in, Use 2-factor, and Steal credit card info]
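Misuse case tests ultimately probe controls such as the input validation and XSS defenses discussed earlier. As a hedged sketch of the kind of defensive input handling such a test would exercise, consider the following; the function names and the 1-100 quantity range are illustrative assumptions, not from the text:

```python
import html
import re

def validate_quantity(raw):
    """Treat the input as untrusted: accept only digit strings, then enforce
    an assumed business rule that quantities fall between 1 and 100."""
    if not re.fullmatch(r"\d{1,3}", raw.strip()):
        raise ValueError("quantity must be a number of one to three digits")
    qty = int(raw)
    if not 1 <= qty <= 100:
        raise ValueError("quantity out of range")
    return qty

def render_comment(untrusted_text):
    """Escape untrusted text before echoing it into an HTML page, so a
    script payload is rendered inert (a basic XSS mitigation)."""
    return html.escape(untrusted_text)
```

A misuse case test would feed these functions the inputs an attacker would try (script tags, out-of-range values, non-numeric strings) and verify that each is rejected or neutralized rather than processed.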