CISSP All-in-One Exam Guide Chapter 20 PDF

Summary

This chapter from the CISSP All-in-One Exam Guide details foundational security operations concepts, change management processes, configuration management, resource protection, patch and vulnerability management, physical security management, and personnel safety and security. It emphasizes managing security as a business function, not just incident response. The chapter also covers organizational roles, security operations (SecOps) integration, accountability, and the need-to-know/least privilege principles for securing resources.

Full Transcript


Managing Security Operations

CHAPTER 20

This chapter presents the following:

- Foundational security operations concepts
- Change management processes
- Configuration management
- Resource protection
- Patch and vulnerability management
- Physical security management
- Personnel safety and security

Management is keeping the trains running on time. —Andy Dunn

Security operations is a broad field, but the image that comes to many of our minds when we hear the term is a security operations center (SOC) where analysts, threat hunters, and incident responders fight off cyberthreats day in and day out. That is, in fact, an important aspect of security operations, but it isn’t the complete scope. A lot of other work goes into ensuring our spaces are protected, our systems are optimized, and our people are doing the right things. This chapter covers many of the issues that we, as security leaders, must tackle to create a secure operational environment for our organizations. Security operations is the business of managing security. It may not be as exciting as hunting down a threat actor in real time, but it is just as important.

Foundational Security Operations Concepts

Security operations revolves around people much more than around computers and networks. A good chunk of our jobs as CISSPs is to lead security teams who prevent teams of attackers from causing us harm; our computers and networks are just the battlefields on which these groups fight each other. Sometimes, it is our own teammates who can become the enemy, either deliberately or through carelessness. So, we can’t really manage security operations without first understanding the roles we need our teammates to fill and the ways in which we keep the people filling those roles honest.

Table 20-1 shows some of the common IT and security roles within organizations and their corresponding job definitions. Each role needs to have a complete and well-defined job description.
Security personnel should use these job descriptions when assigning access rights and permissions in order to ensure that individuals have access only to those resources needed to carry out their tasks. Table 20-1 contains just a few roles with a few tasks per role. Organizations should create a complete list of roles used within their environment, with each role’s associated tasks and responsibilities. This should then be used by data owners and security personnel when determining who should have access to specific resources and the type of access.

A clear and unambiguous understanding of roles and responsibilities across the organization is critical to managing security. Without it, ensuring that everyone has the right access they need for their jobs, and no more, becomes very difficult. In the sections that follow we look at other foundational concepts we all need to be able to apply to our security operations.

Table 20-1  Roles and Associated Tasks

- Cybersecurity Analyst: Monitors the organization’s IT infrastructure and identifies and evaluates threats that could result in security incidents
- Help Desk/Support: Resolves end-user and system technical or operations problems
- Incident Responder: Investigates, analyzes, and responds to cyber incidents within the organization
- IT Engineer: Performs the day-to-day operational duties on systems and applications
- Network Administrator: Installs and maintains the local area network/wide area network (LAN/WAN) environment
- Security Architect: Assesses security controls and recommends and implements enhancements
- Security Director: Develops and enforces security policies and processes to maintain the security and safety of all organizational assets
- Security Manager: Implements security policies and monitors security operations
- Software Developer: Develops and maintains production software
- System Administrator: Installs and maintains specific systems (e.g., database, e-mail)
- Threat Hunter: Proactively finds cybersecurity threats and mitigates them before they compromise the organization

SecOps

In many organizations the security and IT operations teams become misaligned because their responsibilities have different (and oftentimes conflicting) focuses. The operations staff is responsible for ensuring systems are operational, highly available, performing well, and providing users with the functionality they need. As new technology becomes available, they come under pressure from business leaders to deploy it as soon as possible to improve the organization’s competitiveness. But many times this focus on operations and user functionality comes at the cost of security. Security mechanisms commonly decrease performance, delay provisioning, and reduce the functionality available to the users.

The conflicts between the priorities and incentives of the IT operations and security teams can become dysfunctional in many organizations. Many of us have witnessed the finger pointing and even outright hostility that can crop up when things go wrong. A solution that is catching on is SecOps (Security + Operations), which is the integration of security and IT operations people, technology, and processes to reduce risks while improving business agility. The goal is to create a culture in which security is baked into the entire life cycle of every system and process in the organization. This is accomplished by building multifunctional teams where, for instance, a cloud system administrator and a cloud security engineer work together under the leadership of a manager who is responsible for delivering agile and secure functionality to the organization.

Accountability

Users’ access to resources must be limited and properly controlled to ensure that excessive privileges do not provide the opportunity to cause damage to an organization and its resources.
Users’ access attempts and activities while using a resource need to be properly monitored, audited, and logged. The individual user ID needs to be included in the audit logs to enforce individual responsibility. Each user should understand his responsibility when using organizational resources and be accountable for his actions. Capturing and monitoring audit logs helps determine if a violation has actually occurred or if system and software reconfiguration is needed to better capture only the activities that fall outside of established boundaries. If user activities were not captured and reviewed, it would be very hard to determine if users have excessive privileges or if there has been unauthorized access.

Auditing needs to take place in a routine manner. Also, security analysts and managers need to review audit and log events. If no one routinely looks at the output, there really is no reason to create logs. Audit and function logs often contain too much cryptic or mundane information to be interpreted manually. This is why products and services are available that parse logs for organizations and report important findings. Logs should be monitored and reviewed, through either manual or automatic methods, to uncover suspicious activity and to identify an environment that is shifting away from its original baselines. This is how administrators can be warned of many problems before they become too big and out of control.

When reviewing events, administrators need to ask certain questions that pertain to the users, their actions, and the current level of security and access:

- Are users accessing information and performing tasks that are not necessary for their job description? The answer indicates whether users’ rights and permissions need to be reevaluated and possibly modified.
- Are repetitive mistakes being made? The answer indicates whether users need to have further training.
- Do too many users have rights and privileges to sensitive or restricted data or resources? The answer indicates whether access rights to the data and resources need to be reevaluated, whether the number of individuals accessing them needs to be reduced, and/or whether the extent of their access rights should be modified.

Need-to-Know/Least Privilege

Least privilege (one of the secure design principles introduced in Chapter 9) means an individual should have just enough permissions and rights to fulfill her role in the organization and no more. If an individual has excessive permissions and rights, it could open the door to abuse of access and put the organization at more risk than is necessary. For example, if Dusty is a technical writer for a company, he does not necessarily need to have access to the company’s source code. So, the mechanisms that control Dusty’s access to resources should not let him access source code.

Another way to protect resources is enforcing need to know, which means we must first establish that an individual has a legitimate, job role–related need for a given resource. Least privilege and need to know have a symbiotic relationship. Each user should have a need to know about the resources that she is allowed to access. If Mikela does not have a need to know how much the company paid last year in taxes, then her system rights should not include access to these files, which would be an example of exercising least privilege.

The use of identity management software that combines traditional directories, access control systems, and user provisioning within servers, applications, and systems is becoming the norm within organizations. This software provides the capabilities to ensure that only specific access privileges are granted to specific users, and it often includes advanced audit functions that can be used to verify compliance with legal and regulatory directives.
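Put into code, least privilege and need to know amount to a deny-by-default check: access is granted only when an explicit, role-related need has been established. A minimal sketch (the role and resource names below are illustrative, not from the text):

```python
# Deny-by-default mapping: each role lists only the resources it needs.
ROLE_PERMISSIONS = {
    "technical_writer": {"product_docs", "style_guide"},
    "developer": {"source_code", "build_server"},
}

def is_access_allowed(role: str, resource: str) -> bool:
    """Grant access only when the role has an established need to know."""
    return resource in ROLE_PERMISSIONS.get(role, set())

# Like Dusty in the example above, a technical writer gets no source code.
print(is_access_allowed("technical_writer", "source_code"))  # False
print(is_access_allowed("developer", "source_code"))         # True
```

Note that an unknown role receives no access at all, which is the deny-by-default behavior both principles require.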
Separation of Duties and Responsibilities

The objective of separation of duties (another of the secure design principles introduced in Chapter 9) is to ensure that one person acting alone cannot compromise the organization’s security in any way. High-risk activities should be broken up into different parts and distributed to different individuals or departments. That way, the organization does not need to put a dangerously high level of trust in certain individuals. For fraud to take place, collusion would need to be committed, meaning more than one person would have to be involved in the fraudulent activity. Separation of duties, therefore, is a preventive measure that requires collusion to occur for someone to commit an act that is against policy.

Separation of duties helps prevent mistakes and minimize conflicts of interest that can take place if one person is performing a task from beginning to end. For instance, a programmer should not be the only one to test her own code. Another person with a different job and agenda should perform functionality and integrity testing on the programmer’s code, because the programmer may have a focused view of what the program is supposed to accomplish and thus may test only certain functions and input values, and only in certain environments.

Another example of separation of duties is the difference between the functions of a computer user and the functions of a security administrator. There must be clear-cut lines drawn between system administrator duties and computer user duties. These will vary from environment to environment and will depend on the level of security required within the environment. System and security administrators usually have the responsibility of installing and configuring software, performing backups and recovery procedures, setting permissions, adding and removing users, and developing user profiles. The computer user, on the other hand, may set or change passwords, create/edit/delete files, alter desktop configurations, and modify certain system parameters. The user should not be able to modify her own security profile, add and remove users globally, or make critical access decisions pertaining to network resources. This would breach the concept of separation of duties.

Privileged Account Management

Separation of duties also points to the need for privileged account management processes that formally enforce the principle of least privilege. A privileged account is one with elevated rights. When we hear this term, we usually think of system administrators, but it is important to consider that privileges often are gradually attached to user accounts for legitimate reasons but never reviewed again to see if they’re still needed. In some cases, regular users end up racking up significant (and risky) permissions without anyone being aware of it (known as authorization creep). More commonly, you will hear this concept under the label of privileged account management (PAM) because many organizations have very granular, role-based access controls. PAM consists of the policies and technologies used by an organization to control elevated (or privileged) access to any asset. It consists of processes for addressing the needs for individual elevated privileges, periodically reviewing those needs, reducing them to least privilege when appropriate, and documenting the whole thing.

Job Rotation

Job rotation means that, over time, more than one person fulfills the tasks of one position within the organization. This enables the organization to have more than one person who understands the tasks and responsibilities of a specific job title, which provides backup and redundancy if a person leaves the organization or is absent.
Job rotation also helps identify fraudulent activities, and therefore can be considered a detective type of control. If Keith has performed David’s position, Keith knows the regular tasks and routines that must be completed to fulfill the responsibilities of that job. Thus, Keith is better able to identify whether David does something out of the ordinary and suspicious.

A related practice is mandatory vacations. Chapter 1 touched on reasons to make sure employees take their vacations. Reasons include being able to identify fraudulent activities and enabling job rotation to take place. If an accounting employee has been performing a “salami attack” by shaving off pennies from multiple accounts and putting the money into his own account, the employee’s company would have a better chance of figuring this out if that employee is required to take a vacation for a week or longer. When the employee is on vacation, another employee has to fill in. She might uncover questionable documents and clues of previous activities, or the company may see a change in certain patterns once the employee who is committing fraud is gone for a week or two. It is best for auditing purposes if the employee takes two contiguous weeks off from work, which allows more time for fraudulent evidence to appear. Again, the idea behind mandatory vacations is that, traditionally, those employees who have committed fraud are usually the ones who have resisted going on vacation because of their fear of being found out while away.

Service Level Agreements

As we discussed briefly in Chapter 2, a service level agreement (SLA) is a contractual agreement that states that a service provider guarantees a certain level of service. For example, a web server will be down for no more than 52 minutes per year (which is approximately a 99.99 percent availability).
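The availability arithmetic in that example is easy to verify; a quick sketch, assuming a 365-day year:

```python
def downtime_minutes_per_year(availability_pct: float) -> float:
    """Allowed downtime per year (in minutes) for an availability target."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a 365-day year
    return minutes_per_year * (1 - availability_pct / 100)

print(round(downtime_minutes_per_year(99.99), 2))  # 52.56
```

At 99.99 percent availability, the downtime budget works out to about 52.6 minutes per year, matching the roughly 52 minutes quoted.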
SLAs help service providers, whether they are an internal IT operation or an outsourcer, decide what type of availability technology is appropriate. From this determination, the price of a service or the budget of the IT operation can be set. Most frequently, organizations use SLAs with external service providers to guarantee specific performance and, if it is not delivered, to penalize (usually monetarily) the vendor.

The process of developing an internal SLA (that is, one between the IT operations team and one or more internal departments) can also be beneficial to an organization. For starters, it drives a deeper conversation between IT and whoever is requesting the service. This alone can help both sides get a clearer understanding of the opportunities and threats the service brings with it. The requestor will then better understand the tradeoffs between service levels and costs and be able to negotiate the most cost-effective service with the IT team. The IT team can then use this dialogue to justify resources such as budget or staffing. Finally, internal SLAs allow all parties to know what “right” looks like.

Whether the SLA is internal or external, the organization must collect metrics to determine whether or not it is being met. After all, if nobody measures the service, what’s the point of requiring a certain level of it? Identifying these metrics, in and of itself, allows the organization to determine whether a particular requirement is important or not. If both parties are having a hard time figuring out how much scheduled downtime is acceptable, that requirement probably doesn’t need to be included in the SLA.

Change Management

The Greek philosopher Heraclitus said that “the only constant in life is change,” and most of us would agree with him, especially when it comes to IT and security operations in our organizations.
Change is needed to remain relevant and competitive, but it can bring risks that we must carefully manage. Change management, from an IT perspective, is the practice of minimizing the risks associated with the addition, modification, or removal of anything that could have an effect on IT services. This includes obvious IT actions like adding new software applications, segmenting LANs, and retiring network services. But it also includes changes to policies, procedures, staffing, and even facilities. Consequently, any change to security controls or practices probably falls under the umbrella of change management.

Change Management Practices

Well-structured change management practices are essential to minimizing the risks of changes to an environment. The process of devising these practices should include representatives for all stakeholders, so it shouldn’t just be limited to IT and security staff. Most organizations that follow this process formally establish a group that is responsible for approving changes and overseeing the activities of changes that take place within the organization. This group can go by one of many names, but for this discussion we will refer to it as the change advisory board (CAB). The CAB and change management practices should be laid out in the change management policy.

Although the types of changes vary, a standard list of procedures can help keep the process under control and ensure it is carried out in a predictable manner. The following steps are examples of the types of procedures that should be part of any change management policy:

- Request for a change to take place. The individual requesting the change must do so in writing, justify the reasons, and clearly show the benefits and possible pitfalls of (that is, risk introduced by) the change. The Request for Change (RFC) is the standard document for doing this and contains all information required to approve a change.
- Evaluate the change. The CAB reviews the RFC and analyzes its potential impacts across the entire organization. Sometimes the requester is asked to conduct more research and provide more information before the change is approved. The CAB then completes a change evaluation report and designates the individual or team responsible for planning and implementing the change.
- Plan the change. Once the change is approved, the team responsible for implementing it gets to work planning the change. This includes figuring out all the details of how the change interfaces with other systems or processes, developing a timeline, and identifying specific actions to minimize the risks. The change must also be fully tested to uncover any unforeseen results. Regardless of how well we test, there is always a chance that the change will cause an unacceptable loss or outage, so every change request should also have a rollback plan that restores the system to the last known-good configuration.
- Implementation. Once the change is planned and fully tested, it is implemented and integrated into any other affected processes and systems. This may include reconfiguring other systems, changing or developing policies and procedures, and providing training for affected staff. These steps should be fully documented and progress should be monitored.
- Review the change. Once the change is implemented, it is brought back to the CAB for a final review. During this step, the CAB verifies that the change was implemented as planned, that any unanticipated consequences have been properly addressed, and that the risks remain within tolerable parameters.
- Close or sustain. Once the change is implemented and reviewed, it should be entered into a change log. A full report summarizing the change may also be submitted to management, particularly for changes with large effects across the organization.
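The procedure just described can be thought of as a linear state machine that each RFC moves through in order. The state names below follow the steps in the text; the code structure itself is only an illustrative sketch:

```python
# Each RFC moves through these states in order, per the steps above.
STATES = ["requested", "evaluated", "planned", "implemented", "reviewed", "closed"]

def advance(state: str) -> str:
    """Move an RFC to the next state in the change management process."""
    i = STATES.index(state)  # raises ValueError for an unknown state
    if i == len(STATES) - 1:
        raise ValueError("change is already closed")
    return STATES[i + 1]

state = "requested"
while state != "closed":
    state = advance(state)
print(state)  # closed
```

A real change-tracking tool would also record who approved each transition and when, which supports the audit trail discussed throughout this chapter.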
These steps, of course, usually apply to large changes that take place within an organization. These types of changes are typically expensive and can have lasting effects on an organization. However, smaller changes should also go through some type of change control process. If a server needs to have a patch applied, it is not good practice to have an engineer just apply it without properly testing it on a nonproduction server, without having the approval of the IT department manager or network administrator, and without having backup and backout plans in place in case the patch causes some negative effect on the production server. Of course, these changes still need to be documented.

For this reason, ITIL 4 (introduced in Chapter 4) specifies three types of changes that follow the same basic process but are tailored for specific situations:

- Standard changes: Preauthorized, low-risk changes that follow a well-known procedure. Examples include patching a server or adding memory or storage to it.
- Emergency changes: Changes that must be implemented immediately. Examples include implementing a security patch for a zero-day exploit or isolating the network from a DDoS attack.
- Normal changes: All other changes that are not standard changes or emergency changes. Examples include adding a server that will provide new functionality or introducing a new application to (or removing a legacy one from) the golden image.

Regardless of the type of change, it is critical that the operations department create approved backout plans before implementing changes to systems or the network. It is very common for changes to cause problems that were not properly identified before the implementation process began. Many network engineers have experienced the headaches of applying poorly developed “fixes” or patches that end up breaking something else in the system. Developing a backout plan ensures productivity is not negatively affected by these issues. This plan describes how the team will restore the system to its original state before the change was implemented.

Change Management Documentation

Failing to document changes to systems and networks is only asking for trouble, because no one will remember, for example, what was done to that one server in the demilitarized zone (DMZ) six months ago or how the main router was fixed when it was acting up last year. Changes to software configurations and network devices take place pretty often in most environments, and keeping all of these details properly organized is impossible, unless someone maintains a log of this type of activity. Numerous changes can take place in an organization, some of which are as follows:

- New computers installed
- New applications installed
- Different configurations implemented
- Patches and updates installed
- New technologies integrated
- Policies, procedures, and standards updated
- New regulations and requirements implemented
- Network or system problems identified and fixes implemented
- Different network configurations implemented
- New networking devices integrated into the network
- Company acquired by, or merged with, another company

The list could go on and on and could be general or detailed. Many organizations have experienced some major problem that affects the network and employee productivity. The IT department may run around trying to figure out the issue and go through hours or days of trial-and-error exercises to find and apply the necessary fix. If no one properly documents the incident and what was done to fix the issue, the organization may be doomed to repeat the same scramble six months to a year down the road.

Configuration Management

At every point in the O&M part of assets’ life cycles (which we discussed in Chapter 5), we need to also ensure that we get (and keep) a handle on how these assets are configured. Sadly, most default configurations are woefully insecure.
This means that if we do not configure security when we provision new hardware or software, we are virtually guaranteeing successful attacks on our systems. Configuration management (CM) is the process of establishing and maintaining consistent configurations on all our systems to meet organizational requirements.

Configuration management processes vary among organizations but have certain elements in common. Virtually everyone that practices it starts off by defining and establishing organization-wide agreement on the required configurations for all systems in the scope of the effort. At a minimum, this should include the users’ workstations and all business-critical systems. These configurations are then applied to all systems. There will be exceptions, of course, and special requirements that lead to nonstandard configurations, which need to be approved by the appropriate individuals and documented. There will also be changes over time, which should be dealt with through the change management practices defined in the previous section. Finally, configurations need to be periodically audited to ensure continued compliance with them.

Baselining

A baseline is the configuration of a system at a point in time as agreed upon by the appropriate decision makers. For a typical user workstation, a baseline defines the software that is installed (both operating system and applications), policies that are applied (e.g., disabling USB thumb drives), and any other configuration setting such as the domain name, DNS server address, and many others. Baselining allows us to build a system once, put it through a battery of tests to ensure it works as expected, and then provision it out consistently across the organization. In a perfect world, all systems that provide the same functionality are configured identically. This makes it easier to manage them throughout their life cycles.
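Verifying that a deployed system still matches its baseline is easy to automate by diffing actual settings against the agreed ones. A minimal sketch (the configuration keys and values are hypothetical):

```python
# Agreed-upon baseline for a standard user workstation (hypothetical values).
BASELINE = {
    "usb_storage": "disabled",
    "dns_server": "10.0.0.53",
    "os_version": "22.04",
}

def drift(actual: dict) -> dict:
    """Return every setting that differs from the baseline (None = missing)."""
    return {k: actual.get(k) for k, v in BASELINE.items() if actual.get(k) != v}

# A workstation where someone re-enabled USB storage:
ws = {"usb_storage": "enabled", "dns_server": "10.0.0.53", "os_version": "22.04"}
print(drift(ws))  # {'usb_storage': 'enabled'}
```

The same diff output is what we would record for an approved exception: only the deltas from the baseline, not the entire configuration.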
As we all know, however, there are plenty of exceptions in the real world. System configuration exceptions often have perfectly legitimate business reasons, so we can’t just say “no” to exception requests and keep our lives simple. The system baseline allows us to narrow down what makes these exceptional systems different. Rather than document every single configuration parameter again (which could introduce errors and omissions), all we have to do is document what is different from a given baseline.

Baselines do more than simply tell us what systems (should) look like at a given point in time; they also document earlier configuration states for those systems. We want to keep old baselines around because they tell the story of how a system evolved. Properly annotated, baselines tell us not only the “what” but also the “why” of configurations over time.

A related concept to baselining is the golden image, which is a preconfigured, standard template from which all user workstations are provisioned. A golden image is known by many other names including gold master, clone image, master image, and base image. Whatever name you use, it saves time when provisioning systems because all you have to do is clone the image onto a device, enter a handful of parameters unique to the system (such as the hostname), and it’s ready for use. Golden images also improve security by consistently applying security controls to every cloned system. Another advantage is a reduction in configuration errors, which also means a lower risk of inadvertently introduced vulnerabilities.

Provisioning

We already addressed secure provisioning in Chapter 5 but the topic bears revisiting in the context of configuration management. Recall that provisioning is the set of all activities required to provide one or more new information services to a user or group of users (“new” meaning previously not available to that user or group).
Configuration Management vs. Change Management

Change management is a business process aimed at deliberately regulating the changing nature of business activities such as projects or IT services. It is concerned with issues such as changing the features in a system being developed or changing the manner in which remote workers connect to the internal network. While IT and security personnel are involved in change management, they are usually not in charge of it.

Configuration management is an operational process aimed at ensuring that controls are configured correctly and are responsive to the current threat and operational environments. As an information security professional, you would likely lead in configuration management but simply participate in change management processes.

Technically, provisioning and configuration are two different but related activities. Provisioning generally entails acquiring, installing, and launching a new service. Depending on how this is done, that service may still need to be configured (and possibly even baselined).

Automation

As you can imagine, configuration management requires tracking and updating a lot of information on many different systems. This is why mature organizations leverage automation for many of the required tasks, including maintaining individual configuration items in a configuration management database (CMDB). The CMDB can store information about all organizational assets, their baselines, and their relationships to one another. Importantly, a CMDB provides versioning so that, if a configuration error is made, reverting to a previous baseline is easy. More elaborate automation tools are capable of not only tracking configurations but also provisioning systems that implement them.
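The CMDB versioning behavior can be sketched in a few lines. A real CMDB also tracks asset relationships and much more; the class below (with hypothetical names) only illustrates keeping baseline history and rolling back:

```python
class ConfigItem:
    """A CMDB configuration item that keeps its baseline history."""

    def __init__(self, name: str, baseline: dict):
        self.name = name
        self.versions = [dict(baseline)]  # version 0 is the initial baseline

    def update(self, baseline: dict) -> None:
        """Record a new baseline as the latest version."""
        self.versions.append(dict(baseline))

    def revert(self) -> dict:
        """Drop the latest baseline and return the previous known-good one."""
        if len(self.versions) > 1:
            self.versions.pop()
        return self.versions[-1]

ci = ConfigItem("web-server-01", {"tls_min_version": "1.2"})
ci.update({"tls_min_version": "1.0"})  # a mistaken, weaker configuration
print(ci.revert())  # {'tls_min_version': '1.2'}
```

Because every version is retained, the history also tells the story of how the item’s configuration evolved over time.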
Perhaps the best-known tool in this regard, particularly for virtualized or cloud infrastructures, is Ansible, which is an open-source configuration management, deployment, and orchestration tool. Through the use of playbooks written in YAML (which, recursively, stands for “YAML Ain’t Markup Language”), Ansible allows automated asset provisioning and configuration.

Resource Protection

In Chapter 5, we defined assets as anything of worth to the organization. A related concept is a resource, which is anything that is required to perform an activity or accomplish a goal. So, a resource can also be an asset if you own it and it has inherent value to you. In the context of security operations, a resource is anything the organization needs to accomplish any of its tasks. This includes hardware, software, data, and the media on which the last two are stored.

EXAM TIP Though assets and resources are, technically, slightly different things, you should treat them as synonymous in the exam.

We will discuss how to protect hardware resources later in this chapter when we cover physical security. Though we already covered software, data, and media protections in Chapter 6, the topic is worth revisiting as it applies to managing security operations. There are three types of digital resources that are of particular interest in this regard: system images, source files, and backups.

System Images

Because system images are essential to efficiently provisioning systems, they are a key resource both during normal operations and when we are responding to a security incident. Presumably, the images we use to clone new (or replacement) systems are secure because (as a best practice) we put a lot of work into hardening them and ensuring they contain no known vulnerabilities. However, if adversaries were able to modify the images so as to introduce vulnerabilities, they would have free access to any system provisioned using the tainted images.
Similarly, if the images were destroyed (deliberately, accidentally, or through an act of nature), recovering from a large-scale incident would be much more difficult and time-consuming.

Source Files

If the images were unavailable or otherwise compromised, we would have to rebuild everything from scratch. There are also cases in which we just need to install specific software. Either way, we need reliable source files. Source files contain the code that executes on a computer to provide applications or services. This code can exist in either executable form or as a sequence of statements in a high-level language such as C/C++, Java, or Python. Either way, it is possible for adversaries to insert malicious code into source files so that any system provisioned using them will be vulnerable. Worse yet, if you work for a software company with clients around the world, your company may be a much more interesting target for advanced persistent threats (APTs) who may want to compromise your software to breach your customers. This kind of software supply-chain attack is best exemplified by the SolarWinds attack of 2020.

Even if your organization is not likely to be targeted by APTs, you are probably concerned about ransomware attacks. Having good backups is the key to quickly recovering from ransomware (without having to pay the ransom), but it hinges on the integrity and availability of the backup data. Many cybercriminals deliberately look for backups and encrypt them as well to force their victims to pay the ransom.

Backups

Backing up software and having backup hardware devices are two large parts of network availability. You need to be able to restore data if a hard drive fails, a disaster takes place, or some type of software corruption occurs. Every organization should develop a policy that indicates what gets backed up, how often it gets backed up, and how these processes should occur.
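Whether the resource is a system image, a set of source files, or a backup, detecting tampering typically comes down to comparing cryptographic digests against a trusted manifest kept (and ideally signed) separately from the files themselves. A minimal sketch, with illustrative file and manifest names:

```python
# Sketch: flag system images, source files, or backups whose SHA-256
# digest no longer matches a known-good manifest.

import hashlib
import os
import tempfile

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large images
            h.update(chunk)
    return h.hexdigest()

def verify(manifest):
    """manifest: {path: known-good hex digest}; returns paths that fail."""
    return [p for p, good in manifest.items() if sha256_of(p) != good]

# Demo: an untouched file passes; a modified one is flagged.
with tempfile.TemporaryDirectory() as d:
    image = os.path.join(d, "golden.img")
    with open(image, "wb") as f:
        f.write(b"hardened baseline image contents")
    manifest = {image: sha256_of(image)}
    print(verify(manifest))   # -> []
    with open(image, "ab") as f:
        f.write(b"injected backdoor")
    print(verify(manifest))   # -> list containing the tampered image's path
```

The same idea underlies the integrity checks on backups discussed next: the manifest tells you both that a backup ran and that what it wrote is what you expect.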
If users keep important information on their workstations, the operations department needs to develop a method that ensures it is protected, either by including specific workstation directories in the backups or by having users move their critical data to a server share at the end of each day so it gets backed up. Backups may occur once or twice a week, every day, or every three hours. It is up to the organization to determine this interval. The more frequent the backups, the more resources they consume, so there needs to be a balance between backup costs and the actual risk of potentially losing data. An organization may find that conducting automatic backups through specialized software is more economical and effective than spending IT work-hours on the task. The integrity of these backups needs to be checked to ensure they are happening as expected—rather than finding out right after two major servers blow up that the automatic backups were saving only temporary files.

Protecting Backups from Ransomware

The best way to minimize your risks due to ransomware is to have effective backups that are beyond the reach of the cybercriminals and can quickly restore affected systems. This means putting the greatest distance (and security controls) possible between a system and its backups. Obviously, you should never store backups on the system itself or on a directly connected external drive. The following are some tips on how to keep your backups away from threat actors:

• Use a different OS for your backup server. Most ransomware today targets a single type of OS (mostly Windows). Even if the attack is not automated, threat actors are likelier to be proficient in whatever OS they are attacking, so having your backups managed by a system running a different OS automatically gives you a leg up.

• Get your backups out of town.
Whatever you do, make sure your backups are not on a drive that is directly attached to the asset you are protecting, or even on the same LAN segment (like in the same data center). The more distance, the better, especially if you can layer controls like ACLs or even use data diodes. We know of data so sensitive that its backups are physically transported to other states or countries periodically.

• Go old school. Consider using older technologies like optical discs and magnetic tapes. You may get some weird looks from your early-adopter colleagues, but you may save the day when things go sideways on you.

• Protect your backups like your career depends on it. (It may!) Stay up to date on the latest techniques cybercriminals are using to attack backups and ensure you have adequate controls in place to prevent them from being effective.

Hierarchical Storage Management

Hierarchical storage management (HSM) provides continuous online backup functionality. It combines hard disk technology with cheaper and slower optical or tape jukeboxes. The HSM system dynamically manages the storage and recovery of files, which are copied to storage media devices that vary in speed and cost. The faster media holds the files that are accessed more often, and the seldom-used files are stored on the slower, or near-line, devices, as shown in Figure 20-1. The storage media could include optical discs, magnetic disks, and tapes. This functionality happens in the background without the knowledge of the user or any need for user intervention.

HSM works, based on a tuning trade-off between the cost of storage and the availability of information, by migrating the actual content of less-used files to lower-speed, lower-cost storage, while leaving behind a "stub," which looks to the user like it contains the full data of the migrated file.
When the user or an application accesses the stub, the HSM uses the information in the stub to find the real location of the information and then retrieve it transparently for the user. This type of technology was created to save money and time. If all data was stored on hard drives, that would be expensive. If a lot of the data was stored on tapes, it would take too long to retrieve the data when needed. So HSM offers a terrific middle ground, giving you the data you need, when you need it, without having to bother the administrator to track down some tape or optical disc.

[Figure 20-1: HSM provides an economical and efficient way of storing data. Frequently accessed files live on disk drives, with an optical jukebox as the secondary stage and tape as the tertiary stage.]

Backups should include the underlying operating system and applications, as well as the configuration files for both. Systems are attached to networks, and network devices can experience failures and data losses as well. Data loss of a network device usually means the configuration of the network device is lost completely (and the device will not even boot up), or that the configuration of the network device reverts to defaults (which, though it will boot up, does your network little good). Therefore, backups of the configurations of network and other nonsystem devices (for example, the phone system) in the environment are also necessary.

Vulnerability and Patch Management

Dealing with new vulnerabilities and their corresponding patches is an inevitability in cybersecurity. The trick is to deal with these in an informed and deliberate manner. While the following sections treat vulnerability management and patch management separately, it is important to consider them as two pieces of the same puzzle in real life. We may learn of a new vulnerability for which a patch does not yet exist. Equally bad would be applying a patch that brings down a critical business system.
For these reasons (among many others), we should manage vulnerabilities and patches in a synchronized and coordinated manner across our organizations.

Vulnerability Management

No sufficiently complex information system can ever be completely free of vulnerabilities. Vulnerability management is the cyclical process of identifying vulnerabilities, determining the risks they pose to the organization, and applying security controls that bring those risks to acceptable levels. Many people equate vulnerability management with periodically running a vulnerability scanner against their systems, but the process must include more than just that. Vulnerabilities exist not only in software, which is what the scanners assess, but also in business processes and in people. Flawed business processes, such as sharing proprietary information with parties who have not signed a nondisclosure agreement (NDA), cannot be detected by vulnerability scanners. Nor can they detect users who click malicious links in e-mails. What matters most is not the tool or how often it is run, but having a formal process that looks at the organization holistically and is closely tied to the risk management process.

Vulnerability management is part of our risk management process. We identify the things that we have that are of value to us and the threat actors that might take those away from us or somehow interfere with our ability to benefit from them. Then we figure out how these actors might go about causing us losses (in other words, exploiting our vulnerabilities) and how likely these events might be. As we discussed in Chapter 2, this gives us a good idea of our risk exposure. The next step is to decide which of those risks we will address and how. The "how" is typically through the application of a security control. Recall that we can never bring our risk to zero, which means we will always have vulnerabilities for which we have no effective controls.
These unmitigated risks exist because we think the chance of them being realized or their impact on the organization (or both) is low enough for the risk to be tolerable. In other words, the cost of mitigating the risk is not worth the return on our investment. For those risks, the best we can do is continually monitor for changes in their likelihood or potential impact. As you can see, vulnerability management is all about finding vulnerabilities, understanding their impact on the organization, and determining what to do about them. Since information system vulnerabilities can exist in software, processes, or people, it is worthwhile to discuss how we implement and support vulnerability management in each of these areas.

Software Vulnerabilities

Vulnerabilities are usually discovered by security researchers who notify vendors and give them some time (at least two weeks) to work on a patch before the researchers make their findings public. This is known as responsible or ethical disclosure. The Computer Emergency Response Team Coordination Center (CERT/CC) is the main clearinghouse for vulnerability disclosures. Once a vulnerability is discovered, vulnerability scanner vendors release plug-ins for their tools. These plug-ins are essentially simple programs that look for the presence of one specific flaw.

NOTE Some organizations have their own in-house vulnerability research capability or can write their own plug-ins. In our discussion, we assume the more general case in which vulnerability scanning is done using third-party commercial tools whose licenses include subscriptions to vulnerability feeds and related plug-ins.

As previously mentioned, software vulnerability scanning is what most people think of when they hear the term vulnerability management. Scanning is simply a common type of vulnerability assessment that can be divided into four phases:
1. Prepare. First, you have to determine the scope of the vulnerability assessment. What are you testing and how? Having defined the scope, you schedule the event and coordinate it with affected asset and process owners to ensure it won't interfere with critical business processes. You also want to ensure you have the latest vulnerability signatures or plug-ins for the systems you will be testing.

2. Scan. For best results, the scan is automated, follows a script, and happens outside of the regular hours of operation for the organization. This reduces the chance that something goes unexpectedly wrong or that you overlook a system. During the scan, it is helpful to monitor resource utilization (like CPU and bandwidth) to ensure you are not unduly interfering with business operations.

3. Remediate. In a perfect world, you don't find any of the vulnerabilities for which you were testing. Typically, however, you find a system that somehow slipped through the cracks, so you patch it and rescan just to be sure. Sometimes, however, there are legitimate business reasons why a system can't be patched (at least right away), so remediation may require deploying a compensating control or (in the worst case) accepting the risk as is.

4. Document. This important phase is often overlooked because some organizations rely on the reports that are automatically generated by the scanning tools. These reports, however, don't normally include important details like why a vulnerability may intentionally be left unpatched, the presence of compensating controls elsewhere, or the need for more or less frequent scanning of specific systems. Proper documentation ensures that assumptions, facts, and decisions are preserved to inform future decisions.

Process Vulnerabilities

A process vulnerability exists whenever there is a flaw or weakness in a business process, independent of the use of automation.
For example, suppose a user account provisioning process requires only an e-mail from a supervisor asking for an account for the new hire. Since e-mail messages can be spoofed, a threat actor could send a fake e-mail impersonating a real supervisor. If the system administrator creates the account and responds with the new credentials, the adversary would now have a legitimate account with whatever authorizations were requested.

Process vulnerabilities are frequently overlooked, particularly when they exist at the intersection of multiple departments within the organization. In the example, the account provisioning process vulnerability exists at the intersection of a business area (where the fictitious user will supposedly work), IT, and human resources. A good way to find process vulnerabilities is to periodically review existing processes using a red team. As introduced in Chapter 18, a red team is a group of trusted individuals whose job is to look at something from an adversary's perspective. Red teaming is useful in many contexts, including identifying process vulnerabilities. The red team's task in this context would be to study the processes, understand the organization's environment, and then look for ways to violate its security policies. Ideally, red team exercises should be conducted whenever any new process is put in place. Realistically, however, these events take place much less frequently (if at all).

NOTE The term red team exercise is often used synonymously with penetration test. In reality, a red team exercise can apply to any aspect of an organization (people, processes, facilities, products, ideas, information systems) and aims to emulate the actions of threat actors seeking specific objectives. A penetration test, on the other hand, is focused on testing the effectiveness of security controls in facilities and/or information systems.
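Returning to the spoofed-e-mail provisioning example above, one remediation is to stop trusting the From: header and instead require a verifiable approval tag on every request. The sketch below uses an HMAC; the shared key between the supervisor's approval tooling and the sysadmin is our assumption for illustration, not something from the text (in practice it would be provisioned out of band, never hardcoded).

```python
# Sketch: an approval tag that a spoofed e-mail cannot forge without
# the shared key. Key handling here is deliberately simplified.

import hashlib
import hmac

APPROVAL_KEY = b"example-shared-secret"  # hypothetical; distribute out of band

def sign_request(supervisor, new_hire, role):
    msg = f"{supervisor}|{new_hire}|{role}".encode()
    return hmac.new(APPROVAL_KEY, msg, hashlib.sha256).hexdigest()

def verify_request(supervisor, new_hire, role, tag):
    expected = sign_request(supervisor, new_hire, role)
    return hmac.compare_digest(expected, tag)  # constant-time comparison

tag = sign_request("alice.super", "bob.newhire", "analyst")
print(verify_request("alice.super", "bob.newhire", "analyst", tag))      # True
# A tampered request fails even if the original tag was intercepted:
print(verify_request("alice.super", "bob.newhire", "domain-admin", tag)) # False
```

The point is procedural, not cryptographic: the sysadmin now verifies something the impersonator cannot produce, closing the gap between departments that the red team would otherwise flag.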
Human Vulnerabilities

By many accounts, over 90 percent of security incidents can be traced back to a member of an organization doing something they shouldn't have, maliciously or otherwise. This implies that if your vulnerability management is focused exclusively on hardware and software systems, you may not be reducing your attack surface by much. A common approach to managing human vulnerabilities is social engineering assessments. We briefly introduced social engineering in Chapter 18 as a type of attack but return to it now as a tool in your vulnerability management toolkit. Chris Hadnagy, one of the world's leading experts on the subject, defines social engineering as "the act of manipulating a person to take an action that may or may not be in the 'target's' best interest." A social engineering assessment involves a team of trained personnel attempting to exploit vulnerabilities in an organization's staff. This could result in targets revealing sensitive information, allowing the social engineers into restricted areas, clicking malicious links, or plugging into their computer a thumb drive laden with malware.

A social engineering assessment, much like its nefarious counterpart, consists of three phases:

1. Open-source intelligence (OSINT) collection. Before manipulating a target, the social engineer needs to learn as much as possible about that person. This phase is characterized by searches for personal information in social media sites; web searches; and observation, eavesdropping, and casual conversations. Some OSINT tools allow quick searches of a large number of sources for information on specific individuals or organizations.

2. Assessment planning. The social engineer could go on gathering OSINT forever but at some point (typically very quickly) will have enough information to formulate a plot to exploit one or more targets.
Some people respond emotionally to certain topics, while others may best be targeted by impersonating someone in a position of authority. The social engineer identifies the kinds of engagements, topics, and pretexts that are likeliest to work against one or more targets.

3. Assessment execution. Regardless of how well planned an assessment may be, we know that no plan survives first contact. Social engineers have to think quickly on their feet and be very perceptive of their targets' states of mind and emotions. In this phase, they engage targets through some combination of personal face-to-face, telephonic, text, or e-mail exchange and persuade them to take some action that compromises the security of the organization.

Rarely is a social engineering assessment not effective. At the end of the event, the assessors report their findings and use them to educate the organization on how to avoid falling for these tricks. Perhaps the most common type of assessment is in the form of phishing, but a real human vulnerability assessment should be much more comprehensive.

Patch Management

According to NIST Special Publication 800-40, Revision 3, Guide to Enterprise Patch Management Technologies, patch management is "the process for identifying, acquiring, installing, and verifying patches for products and systems." Patches are software updates intended to remove a vulnerability or defect in the software, or to provide new features or functionality for it. Patch management is, at least in a basic way, an established part of organizations' IT or security operations already.

Unmanaged Patching

One approach to patch management is to use a decentralized or unmanaged model in which each software package on each device periodically checks for updates and, if any are available, automatically applies them.
While this approach may seem like a simple solution to the problem, it does have significant issues that could render it unacceptably risky for an organization. Among these risks are the following:

• Credentials. Installing patches typically requires users to have admin credentials, which violates the principle of least privilege.

• Configuration management. It may be difficult (or impossible) to attest to the status of every application in the organization, which makes configuration management much more difficult.

• Bandwidth utilization. Having each application or service independently download the patches will lead to network congestion, particularly if there is no way to control when this will happen.

• Service availability. Servers are almost never configured to automatically update themselves because this could lead to unscheduled outages that have a negative effect on the organization.

There is almost no advantage to decentralized patch management, except that it is better than doing nothing. The effort saved by not having management overhead is more than balanced by the additional effort you'll have to put into responding to incidents and solving configuration and interoperability problems. Still, there may be situations in which it is not possible to actively manage some devices. For instance, if your users are allowed to work from home using personal devices, then it would be difficult to implement the centralized approach we discuss next. In such situations, the decentralized model may be the best to take, provided you also have a way to periodically (say, each time users connect back to the mother ship) check the status of their updates.

Centralized Patch Management

Centralized patch management is considered a best practice for security operations. There are multiple approaches to implementing it, however, so you must carefully consider the pluses and minuses of each.
The most common approaches are:

• Agent based. An update agent is installed on each device. This agent communicates with one or more update servers and compares available patches with software and versions on the local host, updating as needed.

• Agentless. One or more hosts remotely connect to each device on the network using admin credentials and check the remote device for needed updates. A spin on this is the use of Active Directory objects in a domain controller to manage patch levels.

• Passive. Depending on the fidelity that an organization requires, it may be possible to passively monitor network traffic to infer the patch levels on each networked application or service. While minimally intrusive to the end devices, this approach is also the least effective since it may not always be possible to uniquely identify software versions through their network traffic artifacts.

Regardless of the approach you take, you want to apply the patches as quickly as possible. After all, every day you delay is an extra day that your adversaries have to exploit your vulnerabilities. The truth is that you can't (or at least shouldn't) always roll out the patch as soon as it comes out. There is no shortage of reports of major outages caused by rolling out patches without first testing their effects. Sometimes the fault lies with the vendor, who, perhaps in its haste to remove a vulnerability, failed to properly test that the patch wouldn't break any other functionality of the product. Other times the patch may be rock solid and yet have a detrimental second- or third-order effect on other systems on your hosts or networks. This is why testing the patch before rolling it out is a good idea. Virtualization technologies make it easier to set up a patch test lab. At a minimum, you want to replicate your critical infrastructure (e.g., domain controller and production servers) in this virtual test environment.
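The core comparison an agent-based approach performs, installed versions against the latest available from the update server, can be sketched simply. This is a toy illustration: real agents also handle epochs, pre-release tags, and package-manager-specific version schemes, and the package names and versions below are made up.

```python
# Sketch of an update agent's core check: which installed packages are
# older than the latest version the update server advertises?

def parse(v):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(x) for x in v.split("."))

def needs_patching(installed, available):
    """Return {package: latest_version} for every out-of-date package."""
    return {pkg: latest
            for pkg, latest in available.items()
            if pkg in installed and parse(installed[pkg]) < parse(latest)}

installed = {"openssl": "3.0.7", "nginx": "1.24.0"}
available = {"openssl": "3.0.13", "nginx": "1.24.0"}
print(needs_patching(installed, available))   # -> {'openssl': '3.0.13'}
```

Note the tuple comparison: naive string comparison would rank "3.0.7" above "3.0.13", which is exactly the kind of subtle bug that makes hosts silently fall behind on patches.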
Most organizations also create at least one virtual machine (VM) that mimics each deployed operating system, with representative services and applications.

NOTE It is often possible to mitigate the risk created by a software vulnerability using other controls, such as rules for your firewalls, intrusion detection system (IDS), or intrusion prevention system (IPS). This can buy time for you to test the patches. It also acts as a compensating control.

Whether or not you are able to test the patches before pushing them out (and you really should), it is also a good idea to patch your subnets incrementally. It may take longer to get to all systems, but if something goes wrong, it will only affect a subset of your users and services. This gradual approach to patching also serves to reduce network congestion that could result from all systems attempting to download patches at the same time. Obviously, the benefits of gradual patching need to be weighed against the additional exposure that the inherent delays will cause.

Reverse Engineering Patches

Zero-day exploits are able to successfully attack vulnerabilities that are not known to the software vendor or users of its software. For that reason, zero-day exploits are able to bypass the vast majority of controls such as firewalls, antimalware, and IDS/IPS. Though zero-day exploits are very powerful, they are also exceptionally hard to develop and very expensive to buy in the underground markets. There is an easier and cheaper way for attackers to exploit recent vulnerabilities, and that is by reverse engineering the software patches that vendors push out. This approach takes advantage of the delay between a patch being available and it getting pushed to all the vulnerable computers in the organization. If the attacker can reverse engineer the patch faster than the defenders use it to update all computers, then the attacker wins. Some vendors are mitigating this threat by using code obfuscation, which, in an ironic turn of events, is a technique developed by attackers almost 30 years ago in an effort to thwart the then simple pattern-matching approach of antimalware solutions. Even with code obfuscation, it is just a matter of time before the bad guys figure out what the vulnerability is. This puts pressure on the defenders to roll out the patches across the entire organization as quickly as possible. In this haste, organizations sometimes overlook problem indicators. Add to this a healthy application of Murphy's law and you see why it is imperative to have a way to deal with these unknowns. A rollback plan (previously discussed in the "Change Management" section of this chapter) describes the steps by which a change is reversed in order to restore functionality or integrity.

Physical Security

We already discussed physical security in Chapter 10, but our focus then was on the design of sites and facilities. The CISSP CBK breaks physical security into design, which falls under Domain 3 (Security Architecture and Engineering), and operations, which falls under the current Domain 7 (Security Operations). We follow the same approach here. As with any other defensive technique, physical security should be implemented using the defense-in-depth secure design principle. For example, before an intruder can get to the written recipe for your company's secret barbeque sauce, she will need to climb or cut a fence, slip by a security guard, pick a door lock, circumvent a biometric access control reader that protects access to an internal room, and then break into the safe that holds the recipe. The idea is that if an attacker breaks through one control layer, there will be others in her way before she can obtain the company's crown jewels.

NOTE It is also important to have a diversity of controls. For example, if one key works on four different door locks, the intruder has to obtain only one key.
Each entry should have its own individual key or authentication combination.

This defense model should work in two main modes: one mode during normal facility operations and another mode during the time the facility is closed. When the facility is closed, all doors should be locked, with monitoring mechanisms in strategic positions to alert security personnel of suspicious activity. When the facility is in operation, security gets more complicated because authorized individuals need to be distinguished from unauthorized individuals.

Perimeter security controls deal with facility and personnel access controls and with external boundary protection mechanisms. Internal security controls deal with work area separation and personnel badging. Both perimeter and internal security also address intrusion detection and corrective actions. The following sections describe the elements that make up these categories.

External Perimeter Security Controls

Your first layer of defense is your external perimeter. This could be broken down into distinct, concentric areas of increasing security. Let's consider an example taken from the Site Security Design Guide, published by the U.S. General Services Administration (GSA) Public Buildings Service, which is shown in Figure 20-2. In it, we see the entire site is fenced off, which actually creates two security zones: the (external) neighborhood (zone 1) and the standoff perimeter (zone 2). Depending on risk levels, the organization may want to restrict site access and parking by creating a third zone. Even if the risk is fairly low, it may be desirable to ensure that vehicles are unable to get too close to the building.
[Figure 20-2: Security zones around a facility. Zone 1: neighborhood; Zone 2: standoff perimeter; Zone 3: site access and parking; Zone 4: site; Zone 5: building envelope; Zone 6: management and building operations. Source: https://www.wbdg.org/FFC/GSA/site_security_dg.pdf]

This protects the facility against accidents, but also against explosions. (A good rule of thumb is to ensure there is a 200-foot standoff distance between any vehicles and buildings.) Then there is the rest of the enclosed site (zone 4), which could include break areas for employees, backup power plants, and anything else around the building exterior. Finally, there's the inside of the building, which we'll discuss later in this chapter. Each of these zones has its own set of requirements, which should be increasingly restrictive the closer someone gets to the building.

External perimeter security controls are usually put into place to provide one or more of the following services:

• Control pedestrian and vehicle traffic flows
• Provide various levels of protection for different security zones
• Establish buffers and delaying mechanisms to protect against forced entry attempts
• Limit and control entry points

These services can be provided by using the following control types (which are not all-inclusive):

• Access control mechanisms: locks and keys, an electronic card access system, personnel awareness
• Physical barriers: fences, gates, walls, doors, windows, protected vents, vehicular barriers
• Intrusion detection: perimeter sensors, interior sensors, annunciation mechanisms
• Assessment: guards, surveillance cameras
• Response: guards, local law enforcement agencies
• Deterrents: signs, lighting, environmental design

Several types of perimeter protection mechanisms and controls can be put into place to protect an organization's facility, assets, and personnel.
They can deter would-be intruders, detect intruders and unusual activities, and provide ways of dealing with these issues when they arise. Perimeter security controls can be natural (hills, rivers) or manmade (fencing, lighting, gates). Landscaping is a mix of the two. In Chapter 10, we explored Crime Prevention Through Environmental Design (CPTED) and how this approach is used to reduce the likelihood of crime. Landscaping is a tool employed in the CPTED method. Sidewalks, bushes, and created paths can point people to the correct entry points, and trees and spiky bushes can be used as natural barriers. These bushes and trees should be placed such that they cannot be used as ladders or accessories to gain unauthorized access to unapproved entry points. Also, there should not be an overwhelming number of trees and bushes, which could provide intruders with places to hide. In the following sections, we look at the manmade components that can work within the landscaping design.

Fencing

Fencing can be quite an effective physical barrier. Although the presence of a fence may only delay dedicated intruders in their access attempts, it can work as a psychological deterrent by telling the world that your organization is serious about protecting itself. Fencing can provide crowd control and helps control access to entrances and facilities. However, fencing can be costly and unsightly. Many organizations plant bushes or trees in front of the fencing that surrounds their buildings for aesthetics and to make the building less noticeable. But this type of vegetation can damage the fencing over time or negatively affect its integrity. The fencing needs to be properly maintained, because if a company has a sagging, rusted, pathetic fence, it is equivalent to telling the world that the company is not truly serious and disciplined about protection. But a nice, shiny, intimidating fence can send a different message—especially if the fencing is topped with three rungs of barbed wire.
When deciding upon the type of fencing, several factors should be considered. For example, when using metal fencing, the gauge of the metal should correlate to the types of physical threats the organization most likely faces. After carrying out the risk analysis (covered in Chapter 2), the physical security team should understand the probability of enemies attempting to cut the fencing, drive through it, or climb over or crawl under it. Understanding these threats will help the team determine the requirements for security fencing.

The risk analysis results will also help indicate what height of fencing the organization should implement. Fences come in varying heights, and each height provides a different level of security:

- Fences three to four feet high only deter casual trespassers.
- Fences six to seven feet high are considered too high to climb easily.
- Fences eight feet high (possibly with strands of barbed or razor wire at the top) deter the more determined intruder and clearly demonstrate your organization is serious about protecting its property.

The barbed wire on top of fences can be tilted in or out, which also provides extra protection. A prison would have the barbed wire on top of the fencing pointed in, which makes it harder for prisoners to climb and escape. Most organizations would want the barbed wire tilted out, making it harder for someone to climb over the fence and gain access to the premises. Critical areas should have fences at least eight feet high to provide the proper level of protection.

The fencing must be taut (not sagging in any areas) and securely connected to the posts. The fencing should not be easily circumvented by pulling up its posts.

Fencing: Gauges, Mesh Sizes, and Security

The gauge of fence wiring is the thickness of the wires used within the fence mesh.
The lower the gauge number, the larger the wire diameter:

- 11 gauge = 0.0907-inch diameter
- 9 gauge = 0.1144-inch diameter
- 6 gauge = 0.162-inch diameter

The mesh sizing is the minimum clear distance between the wires. Common mesh sizes are 2 inches, 1 inch, and 3/8 inch. It is more difficult to climb or cut fencing with smaller mesh sizes, and the heavier-gauged wiring is harder to cut. The following list indicates the strength levels of the most common gauge and mesh sizes used in chain-link fencing today:

- Extremely high security: 3/8-inch mesh, 11 gauge
- Very high security: 1-inch mesh, 9 gauge
- High security: 1-inch mesh, 11 gauge
- Greater security: 2-inch mesh, 6 gauge
- Normal industrial security: 2-inch mesh, 9 gauge

PIDAS Fencing

Perimeter Intrusion Detection and Assessment System (PIDAS) is a type of fencing that has sensors located on the wire mesh and at the base of the fence. It is used to detect if someone attempts to cut or climb the fence. It has a passive cable vibration sensor that sets off an alarm if an intrusion is detected. PIDAS is very sensitive and can cause many false alarms.

The posts should be buried sufficiently deep in the ground and should be secured with concrete to ensure they cannot be dug up or tied to vehicles and extracted. If the ground is soft or uneven, this might provide ways for intruders to slip or dig under the fence. In these situations, the fencing should actually extend into the dirt to thwart these types of attacks.

Fences work as “first line of defense” mechanisms. A few other controls can be used also. Strong and secure gates need to be implemented. It does no good to install a highly fortified and expensive fence and then have an unlocked or flimsy gate that allows easy access.
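The gauge and mesh combinations in the table above map directly to security levels, which can be captured as a simple lookup. This is an illustrative sketch only; the names (`classify_fence`, `SECURITY_LEVELS`) are mine, not from the chapter:

```python
# Chain-link security levels keyed by (mesh size in inches, wire gauge),
# per the chapter's table. Remember: lower gauge number = thicker wire.
SECURITY_LEVELS = {
    (0.375, 11): "extremely high",
    (1.0, 9): "very high",
    (1.0, 11): "high",
    (2.0, 6): "greater",
    (2.0, 9): "normal industrial",
}

# Wire diameters in inches for the gauges listed in the chapter.
GAUGE_DIAMETER_IN = {11: 0.0907, 9: 0.1144, 6: 0.162}

def classify_fence(mesh_in: float, gauge: int) -> str:
    """Return the security level for a mesh/gauge pair, if the
    combination appears in the chapter's table."""
    return SECURITY_LEVELS.get((mesh_in, gauge), "unrated combination")

print(classify_fence(0.375, 11))  # extremely high
print(GAUGE_DIAMETER_IN[6])       # 0.162 -- the thickest of the three
```

Note how the same 1-inch mesh rates "very high" with 9-gauge wire but only "high" with thinner 11-gauge wire: both mesh size and wire thickness matter.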
Gates basically have four distinct classifications:

- Class I: Residential usage
- Class II: Commercial usage, where general public access is expected; examples include a public parking lot entrance, a gated community, or a self-storage facility
- Class III: Industrial usage, where limited access is expected; an example is a warehouse property entrance not intended to serve the general public
- Class IV: Restricted access; this includes a prison entrance that is monitored either in person or via closed circuitry

Each gate classification has its own long list of implementation and maintenance guidelines to ensure the necessary level of protection. These classifications and guidelines are developed by UL (formerly Underwriters Laboratories), a nonprofit organization that tests, inspects, and classifies electronic devices, fire protection equipment, and specific construction materials. This is the group that certifies these different items to ensure they are in compliance with national building codes. A specific UL code, UL 325, deals with garage doors, drapery, gates, and louver and window operators and systems. So, whereas in the information security world we look to NIST for our best practices and industry standards, in the physical security world, we look to UL for the same type of direction.

Bollards

Bollards usually look like small concrete pillars outside a building. Sometimes companies try to dress them up by putting flowers or lights in them to soften the look of a protected environment. They are placed by the sides of buildings that have the most immediate threat of someone driving a vehicle through the exterior wall. They are usually placed between the facility and a parking lot and/or between the facility and a road that runs close to an exterior wall. An alternative, particularly in more rural environments, is to use very large boulders to surround and protect sensitive sites.
They provide the same type of protection that bollards provide.

Lighting

Many of the items mentioned in this chapter are things people take for granted day in and day out during our usual busy lives. Lighting is certainly one of those items you probably wouldn’t give much thought to, unless it wasn’t there. Unlit (or improperly lit) parking lots and parking garages have invited many attackers to carry out criminal activity that they may not have engaged in otherwise with proper lighting. Breaking into cars, stealing cars, and attacking employees as they leave the office are the more common types of attacks that take place in such situations. A security professional should understand that the right illumination needs to be in place, that no dead spots (unlit areas) should exist between the lights, and that all areas where individuals may walk should be properly lit. A security professional should also understand the various types of lighting available and where they should be used.

Wherever an array of lights is used, each light covers its own zone or area. The size of the zone each light covers depends on the illumination of light produced, which usually has a direct relationship to the wattage capacity of the bulbs. In most cases, the higher the lamp’s wattage, the more illumination it produces. It is important that the zones of illumination coverage overlap. For example, if a company has an open parking lot, then light poles must be positioned within the correct distance of each other to eliminate any dead spots. If the lamps that will be used provide a 30-foot radius of illumination, then the light poles should be erected less than 30 feet apart so there is an overlap between the areas of illumination.

NOTE Critical areas need to have illumination that reaches at least eight feet with the illumination of two foot-candles. A foot-candle is a unit of measure of the intensity of light.
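The pole-spacing arithmetic above can be sketched in a few lines. This follows the chapter's conservative rule (poles spaced closer than one illumination radius); the function names are illustrative, not from the text:

```python
import math

def max_pole_spacing(radius_ft: float) -> float:
    """Maximum pole spacing per the chapter's rule of thumb: keep poles
    closer together than one illumination radius. (Bare geometry only
    requires spacing < 2 * radius for adjacent circles to overlap;
    the text recommends the tighter spacing for solid, even coverage.)"""
    return radius_ft

def poles_needed(run_length_ft: float, radius_ft: float) -> int:
    """Number of poles along a straight run of the given length,
    including poles at both ends."""
    return math.ceil(run_length_ft / max_pole_spacing(radius_ft)) + 1

print(poles_needed(300, 30))  # 11 poles along a 300-ft lot edge with 30-ft lamps
```

The key point is the overlap: sizing spacing from the radius, not from the full coverage diameter, is what eliminates dead spots between zones.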
If an organization does not implement the right types of lights and ensure they provide proper coverage, the probability of criminal activity, accidents, and lawsuits increases. Exterior lights that provide protection usually require less illumination intensity than interior working lighting, except for areas that require security personnel to inspect identification credentials for authorization. It is also important to have the correct lighting when using various types of surveillance equipment. The correct contrast between a potential intruder and background items needs to be provided, which only happens with the correct illumination and placement of lights. If the light is going to bounce off of dark, dirty, or darkly painted surfaces, then more illumination is required for the necessary contrast between people and the environment. If the area has clean concrete and light-colored painted surfaces, then not as much illumination is required. This is because when the same amount of light falls on an object and the surrounding background, an observer must depend on the contrast to tell them apart.

When lighting is installed, it should be directed toward areas where potential intruders would most likely be coming from and directed away from the security force posts. For example, lighting should be pointed at gates or exterior access points, and the guard locations should be more in the shadows, or under a lower amount of illumination. This is referred to as glare protection for the security force. If you are familiar with military operations, you might know that when you are approaching a military entry point, there is a fortified guard building with lights pointing toward the oncoming cars. A large sign instructs you to turn off your headlights, so the guards are not temporarily blinded by your lights and have a clear view of anything coming their way.
Lights used within the organization’s security perimeter should be directed outward, which keeps the security personnel in relative darkness and allows them to easily view intruders beyond the organization’s perimeter. An array of lights that provides an even amount of illumination across an area is usually referred to as continuous lighting. Examples are the evenly spaced light poles in a parking lot, light fixtures that run across the outside of a building, or a series of fluorescent lights used in parking garages.

If an organization’s building is relatively close to someone else’s developed property, a railway, an airport, or a highway, the organization may need to ensure the lighting does not “bleed over” property lines in an obtrusive manner. Thus, the illumination needs to be controlled, which just means the organization should erect lights and use illumination in such a way that it does not blind its neighbors or any passing cars, trains, or planes.

You probably are familiar with the special home lighting gadgets that turn certain lights on and off at predetermined times, giving the illusion to potential burglars that a house is occupied even when the residents are away. Organizations can use a similar technology, which is referred to as standby lighting. The security personnel can configure the times that different lights turn on and off, so potential intruders think different areas of the facility are populated.

NOTE Redundant or backup lights should be available in case of power failures or emergencies. Special care must be given to understand what type of lighting is needed in different parts of the facility in these types of situations. This lighting may run on generators or battery packs.

Responsive area illumination takes place when an IDS detects suspicious activities and turns on the lights within a specific area. When this type of technology is plugged into automated IDS products, there is a high likelihood of false alarms.
Instead of continually having to dispatch a security guard to check out these issues, an organization can install a CCTV camera (described in the upcoming section “Visual Recording Devices”) to scan the area for intruders.

If intruders want to disrupt the security personnel or decrease the probability of being seen while attempting to enter an organization’s premises or building, they could attempt to turn off the lights or cut power to them. This is why lighting controls and switches should be in protected, locked, and centralized areas.

Surveillance Devices

Usually, installing fences and lights does not provide the necessary level of protection an organization needs to protect its facility, equipment, and employees. Therefore, an organization needs to ensure that all areas are under surveillance so that security personnel notice improper actions and address them before damage occurs. Surveillance can happen through visual detection or through devices that use sophisticated means of detecting abnormal behavior or unwanted conditions. It is important that every organization have a proper mix of lighting, security personnel, IDSs, and surveillance technologies and techniques.

Visual Recording Devices

Because surveillance is based on sensory perception, surveillance devices usually work in conjunction with guards and other monitoring mechanisms to extend their capabilities and range of perception.
A closed-circuit TV (CCTV) system is a commonly used monitoring device in most organizations, but before purchasing and implementing a CCTV system, you need to consider several items:

- The purpose of CCTV: To detect, assess, and/or identify intruders
- The type of environment the CCTV camera will work in: Internal or external areas
- The field of view required: Large or small area to be monitored
- Amount of illumination of the environment: Lit areas, unlit areas, areas affected by sunlight
- Integration with other security controls: Guards, IDSs, alarm systems

The reason you need to consider these items before you purchase a CCTV product is that there are so many different types of cameras, lenses, and monitors that make up the different CCTV products. You must understand what is expected of this physical security control, so that you purchase and implement the right type.

CCTVs are made up of cameras, a controller and digital video recording (DVR) system, and a monitor. Remote storage and remote client access are usually added to prevent threat actors (criminals, fire) from destroying the recorded videos and to allow off-duty staff to respond to alarms generated by the system without having to drive back to the office. The camera captures the data and transmits it to the controller, which allows the data to be displayed on a local monitor. The data is recorded so that it can be reviewed at a later time if needed. Figure 20-3 shows how multiple cameras can be connected to one controller, which allows several different areas to be monitored at one time. The controller accepts video feed from all the cameras and interleaves these transmissions over one line to the central monitor.

A CCTV sends the captured data from the cameras to the controller using a special network, which can be wired or wireless. The term “closed-circuit” comes from the fact that the very first systems used this special closed network instead of broadcasting the signals over a public network.
This network should be encrypted so that an intruder cannot manipulate the video feed that the security guard is monitoring.

Figure 20-3 Several cameras can be connected to a DVR that can provide remote storage and access.

The most common type of attack is to replay previous recordings without the security personnel knowing it. For example, if an attacker is able to compromise a company’s CCTV and play the recording from the day before, the security guard would not know an intruder is in the facility carrying out some type of crime. This is one reason why CCTVs should be used in conjunction with intruder detection controls, which we address in the next section.

Most of the CCTV cameras in use today employ light-sensitive chips called charge-coupled devices (CCDs). The CCD is an electrical circuit that receives input light from the lens and converts it into an electronic signal, which is then displayed on the monitor. Images are focused through a lens onto the CCD chip surface, which forms the electrical representation of the optical image. It is this technology that allows for the capture of extraordinary detail of objects and precise representation, because it has sensors that work in the infrared range, which extends beyond human perception. The CCD sensor picks up this extra “data” and integrates it into the images shown on the monitor to allow for better granularity and quality in the video.

Two main types of lenses are used in CCTV: fixed focal length and zoom (varifocal). The focal length of a lens defines its effectiveness in viewing objects from a horizontal and vertical view. The focal length value relates to the angle of view that can be achieved. Short focal length lenses provide wider-angle views, while long focal length lenses provide a narrower view.
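The focal length/angle-of-view relationship is just trigonometry: the horizontal angle of view is 2·arctan(sensor width / (2 × focal length)). A minimal sketch, assuming a common 1/3-inch sensor about 4.8 mm wide (my assumption, not a figure from the chapter):

```python
import math

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float = 4.8) -> float:
    """Horizontal angle of view in degrees for a given focal length.
    The 4.8 mm default approximates a 1/3-inch sensor (an assumption)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Shorter focal length -> wider view; longer -> narrower, per the chapter.
print(round(horizontal_fov_deg(2.8), 1))  # ~81.2 degrees (wide-area lens)
print(round(horizontal_fov_deg(8.0), 1))  # ~33.4 degrees (entrance lens)
```

This matches the chapter's lens choices: a short 2.8 mm lens sweeps a whole warehouse floor, while an 8 mm lens narrows the view to a single entrance.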
The size of the images shown on a monitor, along with the area covered by one camera, is defined by the focal length. For example, if a company implements a CCTV camera in a warehouse, the focal length lens values should be between 2.8 and 4.3 millimeters (mm) so the whole area can be captured. If the company implements another CCTV camera that monitors an entrance, that lens value should be around 8 mm, which allows a smaller area to be monitored.

NOTE Fixed focal length lenses are available in various fields of view: wide, medium, and narrow. A lens that provide
