CySA+ Lessons 5-7 (Quiz 2) PDF
CySA+ Lesson 5 Part A: Digital Forensics
Digital Forensics Analysts
○ Identify digital forensics techniques
○ Analyze network-related IOCs
○ Analyze host-related IOCs
○ Analyze application-related IOCs
○ Analyze lateral movement and pivot IOCs
Digital forensics is the science of collecting evidence from computer systems to a standard that will be accepted in a court of law. Like DNA or fingerprints, digital evidence is mostly latent, meaning that the evidence cannot be seen with the naked eye; rather, it must be interpreted using a machine process.
Cybersecurity analysts may be called upon to work closely with forensic analysts following an incident. As part of a CSIRT (Computer Security Incident Response Team) or SOC (Security Operations Center), a forensics analyst may play several roles following a security incident or in general support of cybersecurity, such as:
○ Investigating and reconstructing the cause of a cybersecurity incident.
○ Investigating whether any crimes, compliance violations, or inappropriate behavior have occurred.
○ Following forensics procedures to protect evidence that may be needed if a crime has occurred.
○ Determining if sensitive, protected data has been exposed.
○ Contributing to and supporting processes and tools used to protect evidence and ensure compliance.
○ Supporting ongoing audit processes and record maintenance.
Digital Forensics Procedures
Your organization may have legal obligations when investigating a cybersecurity incident, and you will have obligations to your organization and its stakeholders to find the underlying cause of the incident. It is important to have written procedures to ensure that you handle forensics properly, effectively, and in compliance with the applicable regulations.
A forensics investigation will entail the following procedures within four general phases:
○ Identification
○ Collection
○ Analysis
○ Reporting
Digital Forensics Procedures (Identification)
During the identification phase you want to ensure that the scene is safe.
○ You do not want to be collecting evidence in the middle of an unsafe or active situation.
Secure the scene to prevent contamination of evidence. Record the scene using video and identify witnesses for interview. Identify the scope of the evidence being collected.
Digital Forensics Procedures (Collection)
Ensure authorization to collect the evidence using tools and methods that will withstand legal scrutiny. Document and prove the integrity of evidence as it is collected, and ensure that it is stored in secure, tamper-evident packaging.
Digital Forensics Procedures (Analysis)
Create a copy of the evidence for analysis, ensuring that the copy can be related directly to the primary evidence source. Use repeatable methods and tools to analyze the evidence.
Digital Forensics Procedures (Reporting)
Create a report of the methods and tools used, and then present findings and conclusions.
Legal Hold
Legal hold refers to the requirement that information that may be relevant to a court case must be preserved. Information subject to legal hold might be defined by regulators or industry best practice, or there may be a litigation notice from law enforcement or attorneys pursuing a civil action. This means that computer systems may be taken as evidence, with all the obvious disruption to a network that entails.
Forensics Analyst Ethics
It is important to note that strong ethical principles must guide forensics analysis:
○ Analysis must be performed without bias. Conclusions and opinions should be formed only from the direct evidence under analysis.
○ Analysis methods must be repeatable by third parties with access to the same evidence.
○ Ideally, the evidence must not be changed or manipulated.
If a device used as evidence must be manipulated to facilitate analysis (disabling the lock feature of a mobile phone or preventing remote wipe, for example), the reasons for doing so must be sound and the process for doing so must be recorded. Defense counsel may try to use any deviation from good ethical and professional behavior to have the forensic investigator's findings dismissed.
Data Acquisition
Data acquisition is the process of obtaining a forensically clean copy of data from a device held as evidence. If the computer system or device is not owned by the organization, there is the question of whether search or seizure is legally valid. This impacts bring-your-own-device (BYOD) policies. For example, if an employee is accused of fraud, you must verify that the employee's equipment and data can be legally seized and searched. Any mistake may make evidence gained from the search inadmissible.
Data acquisition usually proceeds by using a tool to make an image from the data held on the target device. An image can be acquired from either volatile or nonvolatile storage. The general principle is to capture evidence in the order of volatility, from more volatile to less volatile. The ISOC best practice guide to evidence collection and archiving, published as tools.ietf.org/html/rfc3227, sets out the general order as follows:
○ CPU registers and cache memory (including cache on disk controllers, GPUs, etc.)
○ Contents of system memory (RAM), including temporary file systems/swap space/virtual memory
○ Data on persistent mass storage devices (HDDs, SSDs, flash memory devices)
○ Remote logging and monitoring data
○ Physical configuration and network topology
○ Archival media
Order of Volatility
System Memory Image Acquisition
A system memory dump creates an image file that can be analyzed to identify the processes that are running, the contents of temporary file systems, Registry data, network connections, cryptographic keys, and so on.
○ It can also be a means of accessing data that is encrypted when stored on a mass storage device.
System memory contains volatile data. There are various methods of collecting the contents of system memory:
○ Live acquisition
○ Crash dump
○ Hibernation file
○ Page file
Live Acquisition
Live acquisition involves a specialized hardware or software tool that can capture the contents of memory while the computer is running. Unfortunately, this type of tool needs to be pre-installed, as it requires a kernel mode driver to function. Two examples are:
○ Memoryze from FireEye (fireeye.com/services/freeware/memoryze.html)
○ F-Response TACTICAL (f-response.com/software/tac)
User mode tools, such as running dd on a Linux host, will only achieve a partial capture. Memoryze is free to download; F-Response is not.
Crash Dump
When Windows encounters an unrecoverable kernel error, it can write the contents of memory to a dump file. On modern systems, there is unlikely to be a complete dump of all the contents of memory, as that could take up a lot of disk space. However, even minidump files may be a valuable source of information.
Hibernation File
This file is created on disk when the computer is put into a sleep state. If it can be recovered, the data can be decompressed and loaded into a software tool for analysis. The drawback is that network connections will have been closed, and malware may have detected the use of a sleep state and performed anti-forensics.
Page File
This stores pages of memory in use that exceed the capacity of the host's RAM modules. When a system is low on physical memory, the OS will write pages of memory to disk to free up space for the program that currently demands more RAM (this is known as virtual memory). The page file is not structured in a way that analysis tools can interpret, but it is possible to search for strings.
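Since the page file can only usefully be searched for strings, that search can be sketched in Python, mimicking the Unix `strings` utility. The sample blob and the minimum length of 4 are illustrative, not part of any real tool:

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Find runs of printable ASCII characters, like the Unix `strings` tool."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Example: scan a raw dump for readable artifacts such as URLs or paths.
blob = b"\x00\x01http://example.com\x00\xffC:\\Users\\victim\x02ok\x00"
print(extract_strings(blob))  # → ['http://example.com', 'C:\\Users\\victim']
```

Note that "ok" is skipped because it is shorter than the minimum run length; in practice an analyst would run a tool like this over the whole page file and then grep the output for indicators.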
Disk Image Acquisition
Disk image acquisition refers to acquiring data from nonvolatile storage media, such as hard disk drives (HDDs), solid-state drives (SSDs), and USB flash media drives. To obtain a forensically sound image from nonvolatile storage, you need to ensure that nothing you do alters data or metadata (properties) on the source disk or file system. You must also decide how to perform the acquisition:
○ Live acquisition: Involves copying data while the computer is still running. This may capture more evidence or more data for analysis and reduce the impact on overall services, but data on the actual disks will have changed, so this method may not produce legally acceptable evidence. It may also alert the adversary and allow them time to perform anti-forensics.
○ Static acquisition by shutting down the computer: This runs the risk that the malware detects the shutdown process and performs anti-forensics to try to remove traces of itself.
○ Static acquisition by pulling the plug: This means disconnecting the power at the wall socket (not using the hardware power-off button). This most likely preserves the storage devices in a forensically clean state, but there is a risk of data corruption.
Whichever method is used, it is imperative to document the steps taken and supply a timeline for your actions.
Write Blockers
Write blockers ensure that the image acquisition tools you use do not contaminate the source disk. They prevent any data on the disk or volume from being changed by filtering write commands at the firmware driver level.
○ MOUNTING A DRIVE AS READ-ONLY WITHIN A HOST OS IS NOT SUFFICIENT. DO NOT TAKE CHANCES.
A write blocker can be implemented as a hardware device or as software running on the forensics workstation.
○ For example, the CRU Forensic UltraDock write blocker appliance supports ports for all main host and drive adapter types.
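The imaging step itself can be sketched in Python. This is a minimal illustration, not a forensics tool: `acquire_image` is a hypothetical helper that reads the source in chunks and hashes while it copies, so the digest of the original is captured at acquisition time. In practice the source would be a device sitting behind a write blocker and the work would be done with dedicated imaging software:

```python
import hashlib

def acquire_image(src_path: str, dst_path: str, chunk_size: int = 1 << 20) -> str:
    """Copy a source device/file to an image file, hashing while copying.
    Returns the SHA-256 digest so the image can later be re-verified.
    (Sketch only: real acquisitions go through a write blocker.)"""
    digest = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(chunk_size):
            digest.update(chunk)
            dst.write(chunk)
    return digest.hexdigest()
```

Before analysis, the image file would be hashed again with the same algorithm and compared to the digest recorded at acquisition, which is exactly the verification discussed in the next section.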
Hashing
A critical step in the presentation of evidence is to prove that analysis has been performed on an image that is identical to the data present on the physical media and that neither data set has been tampered with.
○ The standard means of proving this is to create a cryptographic hash or fingerprint of the disk contents and any derivative images made from it.
○ When comparing hash values, you need to use the same algorithm that was used to create the reference value.
Two hashing algorithms that we'll go over are:
○ Secure Hash Algorithm (SHA)
○ Message Digest Algorithm (MD5)
Message Digest Algorithm (MD5)
The Message Digest algorithm was designed in 1990 and updated to MD5, released in 1992 with a 128-bit hash value. It is no longer secure. Despite this lack of security, many forensics tools default to using MD5, as it is a bit faster than SHA, it offers better compatibility between tools, and the chances of an adversary exploiting a collision in that context are more remote.
Secure Hash Algorithm (SHA)
SHA is one of the Federal Information Processing Standards (FIPS) developed by NIST for the US government. SHA was created to address weaknesses in MD.
There are two versions of the standard in common use:
○ SHA-1 was released in 1995 to address weaknesses in the original SHA algorithm. It uses a 160-bit digest, but it has its own weaknesses and is no longer considered secure.
○ SHA-2 addresses the weaknesses of SHA-1 and uses longer digests (256 and 512 bits).
Carving
When you delete a file you are not really deleting the file; you are simply deleting the metadata that tells the OS where that file is located. This tells the OS that the blocks the file occupied are unallocated and free to be written over. (Strictly, slack space is the unused space between the end of a file and the end of its last allocated cluster; both unallocated space and slack space can hold remnants of deleted data.) File carving is the process of extracting data from an image when that data has no associated file system metadata.
○ A file-carving tool analyzes the disk at sector/page level and attempts to piece together data fragments from unallocated and slack space to reconstruct deleted files, or at least bits of information from deleted files.
Some file carving tools include EnCase, FTK, and Autopsy.
○ Scalpel is an open source command line tool that can be used to conduct file carving.
Chain of Custody
The chain of custody is a record of evidence handling from collection through presentation in court. Evidence can be hardware components, electronic data, or telephone systems. The chain of custody documentation reinforces the integrity and proper custody of evidence from collection, to analysis, to storage, and finally to presentation. When security breaches go to trial, the chain of custody protects an organization against accusations that evidence has either been tampered with or is different than it was when it was collected. Every person in the chain who handles evidence must log the methods and tools they used.
Physical devices taken from the crime scene should be identified, bagged, sealed, and labeled. Tamper-proof (AKA "tamper-evident") bags cannot be opened and then resealed covertly. It is also appropriate to ensure that the bags have anti-static shielding to reduce the possibility that data will be damaged or corrupted on the electronic media by electrostatic discharge (ESD).
Criminal cases or internal security audits can take months or years to resolve, so you must be able to preserve all the gathered evidence properly for a lengthy period. Computer hardware is prone to wear and tear, and important storage media like hard disks can fail even when used normally, or when not used at all. A failure of this kind may mean the corruption or loss of your evidence, either of which may have severe repercussions for your investigation.
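Returning to file carving for a moment: the signature-scanning approach used by tools like Scalpel can be sketched in Python. This toy version only looks for JPEG start-of-image and end-of-image markers and assumes files are contiguous and unfragmented, which real carvers do not:

```python
def carve_jpegs(image: bytes) -> list[bytes]:
    """Scan a raw image for JPEG start (FF D8 FF) and end (FF D9) markers
    and return the byte ranges between them. Real carvers (Scalpel, Autopsy)
    also handle fragmentation and validate file structure; this is a sketch."""
    SOI, EOI = b"\xff\xd8\xff", b"\xff\xd9"
    carved, pos = [], 0
    while (start := image.find(SOI, pos)) != -1:
        end = image.find(EOI, start + len(SOI))
        if end == -1:
            break  # header with no footer: likely an overwritten fragment
        carved.append(image[start:end + len(EOI)])
        pos = end + len(EOI)
    return carved

# A raw image with one deleted "JPEG" sitting in unallocated space:
disk = b"\x00" * 8 + b"\xff\xd8\xff\xe0fakejpegdata\xff\xd9" + b"\x00" * 8
print(len(carve_jpegs(disk)))  # → 1
```

Production carvers work from large signature databases (configured per file type in Scalpel's case) rather than a single hard-coded pair of markers.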
Faraday Bags
CySA+ Lesson 5B: Analyze Network-related IOCs
This section also relates to objective 4.3:
○ Given an incident, analyze potential indicators of compromise.
Analysts must be able to identify and explain the IOCs found in network environments, as well as possible mitigation strategies for those IOCs.
Traffic Spikes and DDoS IoCs
DDoS attacks are often devastating to availability because even the largest and most well-defended networks can be overwhelmed by the sheer volume and distribution of malicious traffic. An unexpected surge in traffic from internet hosts could be a sign of an ongoing DDoS attack. A traffic spike is an increase in the number of connections that a server, router, or load balancer is processing. Traffic statistics can reveal indicators of DDoS-type attacks, but you must establish baseline values for comparison; traffic spikes are diagnosed by comparing the number of connection requests to that baseline.
Bandwidth Consumption and DRDoS
Bandwidth consumption can be measured either as the value of bytes sent or received or as a percentage of link utilization. In a distributed reflection DoS (DRDoS), or amplification attack, the adversary spoofs the victim's IP address and tries to open connections with multiple servers. Those servers direct their SYN/ACK responses to the victim server, rapidly consuming the victim's available bandwidth.
DDoS Mitigation
While load balancers and IP filters can offer some protection from DDoS attacks, these techniques can easily be overwhelmed by the large and distributed botnets usually used in modern DDoS attacks. The key to repelling a sustained attack lies in real-time analysis of log files to identify the pattern of suspicious traffic and redirecting the malicious traffic to a blackhole or sinkhole.
Beaconing Intrusion IOCs
Command and control (C&C/C2) refers to an infrastructure of hosts with which attackers direct, distribute, and control malware.
○ This is made possible through the utilization of a botnet.
After compromising systems and turning them into zombies, the attacker adds these systems to a pool of resources and then issues commands to the resources in this pool. A command can be something as simple as a ping (or "heartbeat") to verify that the bot is still alive in the botnet; this is known as beaconing. Another command can be to infect hosts that the bot is connected to in a network.
A bot may beacon its C2 server by sending simple transmissions at regular intervals to unrecognized or malicious domains. Irregular peer-to-peer (P2P) traffic in the network could indicate that a bot is communicating with a centralized C2 server. Hosts in the C2 network are difficult to pin down because they frequently change DNS names and IP addresses.
Beaconing activity is detected by capturing metadata about all the sessions established or attempted and analyzing it for patterns that constitute suspicious activity. One problem, though, is that many legitimate applications also perform beaconing (NTP, autoupdate systems, cluster services, etc.), so indicators would have to include known bad IP addresses, the rate and timing of attempts, and the size of response packets.
Command and Control Methods
Many different platforms and methods can be utilized in the C&C process:
○ Internet Relay Chat (IRC)
○ HTTP and HTTPS
○ DNS
○ Social media websites
○ Cloud services
○ Media and document files
Internet Relay Chat (IRC)
Internet Relay Chat is a group communication protocol. IRC networks are divided into discrete channels, which are the individual forums used by clients to chat. With IRC it is easy for an attacker to set up an IRC server and begin sending interactive control directives to the individual bots connected to it. IRC traffic is relatively easy for admins to detect, and many organizations have no use for the protocol and thus block all IRC communications.
IRC in general is on the decline, so IRC is no longer widely used as a C2 channel.
HTTP and HTTPS
Unlike IRC, communication over HTTP and HTTPS is still widely used in almost every organizational network. Blocking these protocols entirely may not be a viable option and could cause major disruption to organizational productivity and processes. HTTP and HTTPS are also more viable as a C2 channel because it can be difficult to separate malicious traffic from legitimate traffic. When used in C&C, the attacker embeds commands encoded within HTML served from multiple web servers as a way to communicate with the bots. (These commands and responses can be encrypted.)
One way to mitigate this is to use an intercepting proxy at the network edge.
○ The proxy can decrypt and inspect all traffic, and only re-encrypt and forward legitimate requests and responses.
DNS
Because DNS traffic is not inspected or filtered in most private networks, attackers see an opportunity for their control messages to evade detection. DNS as a C&C channel is effective because the bot does not even need a direct connection outside the network. All it needs to do is connect to a local DNS resolver that executes lookups on authoritative servers outside the organization, and it can receive a response with a control message.
Social Media Websites
Facebook, Twitter, and LinkedIn can be (and have been) vectors for C&C operations. Social media platforms like these are a way to "live off the land," issuing commands through the platform's messaging functionality or through account profiles.
○ For example, many businesses implicitly trust LinkedIn.
○ An attacker could set up an account and issue commands to bots through the account's profile, using fields like employment status, employment history, status updates, and more.
There is evidence that a C&C operation used Twitter accounts to post seemingly random hashtags.
These hashtags were encoded command strings, and bots would scour Twitter messages for these hashtags to receive their orders.
○ intego.com/mac-security-blog/flashback-mac-malware-uses-twitter-as-command-and-control-center
Cloud Services
Cloud companies that supply a wide variety of services, especially infrastructure and platform services, are also at risk of being a C&C vector. Attackers have used Google's App Engine platform to send C&C messages to bots through a custom application hosted by the service. App Engine is attractive to attackers because it offers free, limited access to the service.
○ Instead of incurring the cost of setting up their own servers, attackers use a cloud company's reliable and scalable infrastructure to support their C&C operations.
Media and Document Files
Media file formats like JPEG, MP3, and MPEG use metadata to describe images, audio, and video. An attacker could embed control messages inside this metadata and then send the media file to bots over any number of communication channels that support media sharing. Because monitoring systems do not typically look at media metadata, the attacker may be able to evade detection.
Rogue Devices on a Network
A rogue device is any unauthorized piece of electronic equipment that is attached to a network or to assets in an organization. A USB thumb drive may be attached to a web server to siphon sensitive data. An extra NIC or Wi-Fi adapter may be installed on an employee's workstation to create a side channel for an attack. An employee's personal smartphone may be connected to the network, exposing the network to malware. The risk from rogue devices is a major reason why you should have an inventory of all devices in your organization.
Rogue Devices on a Network (Cont'd)
Rogue system detection refers to the process of identifying (and removing) machines on the network that are not supposed to be there.
You should be aware that "system" could mean several distinct types of device (and software):
○ Network taps - a physical device might be attached to cabling to record packets passing over that segment. Once attached, taps usually cannot be detected from other devices inline with the network, so physical inspection of the cabled infrastructure is necessary.
○ WAPs - while there are dedicated rogue WAPs such as the WiFi Pineapple (distributed by Hak5), anyone with access to your network can create a WAP, even from a phone or laptop. In some cases this can be used to mislead others into connecting to the rogue access point, which then opens the door for a MITM attack.
○ Servers - an adversary may also try to set up a server as a malicious honeypot to harvest network credentials or other information.
○ Wired/wireless clients - end users can knowingly or unknowingly introduce malware to your network. They could also perform network reconnaissance or data exfiltration.
○ Software - rogue servers and applications, such as malicious DHCP or DNS servers, may be installed covertly on authorized hardware.
○ Smart appliances - devices such as printers, webcams, and VoIP handsets have all suffered from exploitable vulnerabilities in their firmware.
If use of these assets is not tracked and monitored, they could represent potential vectors for an adversary.
Rogue Machine Detection
There are several techniques available to perform rogue machine detection:
○ Visual inspection of ports/switches
○ Network mapping/host discovery - enumeration scanners should identify hosts via banner grabbing/fingerprinting.
Finding rogue hosts on a large network from a scan may still be difficult.
○ Wireless monitoring - discover unknown or unidentifiable service set identifiers (SSIDs) showing up within range of the office.
○ Packet sniffing - reveal the use of unauthorized protocols on the network and unusual peer-to-peer communication flows.
○ NAC and IDS - security suites and appliances can combine automated network scanning with defense and remediation suites to try to prevent rogue devices from accessing the network.
Data Exfiltration IoCs
Usually access is not the end goal of the attacker; they most likely also want to steal sensitive data from your organization. The malicious transfer of data from one system to another is called data exfiltration. Data exfiltration can be performed over many different types of network channels. You will be looking for unusual endpoints or unusual traffic patterns in any of the following:
○ HTTP/HTTPS
○ DNS
○ FTP, IM, P2P, email
○ Explicit tunnels such as SSH or VPNs
HTTP Data Exfiltration
Watch for HTTP (or HTTPS) transfers to consumer file sharing sites or unknown domains. One typical approach is for an adversary to compromise consumer file sharing accounts (OneDrive, Dropbox, or Google Drive, for instance) and use them to receive the exfiltrated data. The more the organization allows file sharing with external cloud services, the more channels it opens that an attacker can use to exfiltrate critical information. If data loss prevention systems detect a sensitive file outbound for Dropbox, for example, they may allow it to pass; those systems won't necessarily be able to discern legitimate from illegitimate use of a single file. So an attacker doesn't even need access to the employees' official Dropbox share; the attacker can open their own share, drop the files in, and the data is leaked.
Also watch for HTTP requests to database-backed services.
An adversary may use SQL injection or similar techniques to copy records from a database that they should not normally have access to. Injection attempts can be detected by web application firewalls (WAFs). Other indicators of injection-style attacks are spikes in requests to PHP files or other scripts, and unusually large HTTP response packets.
CySA+ Lesson 6A: Explain Incident Response Processes
Objectives:
○ Explain the importance of the incident response process.
○ Given a scenario, apply the appropriate incident response procedure.
Incident Response Phases
Incident response procedures are the guideline-based actions taken to deal with security-related events. The incident is the breach or suspected breach that will be responded to by either a CSIRT (Computer Security Incident Response Team) or a CERT (Computer Emergency Response Team). The NIST Computer Security Incident Handling Guide identifies the following stages in an incident response life cycle:
○ Preparation
○ Detection and Analysis
○ Containment
○ Eradication and Recovery
○ Post-Incident Activity
Incident Response: Preparation
During this phase you want to make your system resilient against attacks before the attacks can even happen. Aside from the technical defenses this involves, this phase includes creating policies and procedures for employees to adhere to, as well as setting up confidential lines of communication that are separate from your internal network's communication systems.
Incident Response: Detection and Analysis
You want to be able to detect whether an incident has taken place and be able to determine how severe the damage incurred by the intrusion is.
The act of determining which parts of your network have been affected more severely than others, and which parts are more important than others and thus require a more urgent response, is called "triage."
Incident Response: Containment
This step involves limiting the scope and magnitude of the incident. You want to limit the impact on employees, customers, and business partners. You will also want to ensure that you are securing data from further breaches.
Incident Response: Eradication and Recovery
Once the incident is contained (meaning the problem is no longer spreading or affecting the system), you can move to eradicate the problem and return the system to normal. This phase may have to be repeated more than once, as you may detect more issues as you go through your system eradicating them.
Incident Response: Post-Incident Activity
You want to ensure that you document everything the incident involved:
○ What led up to the incident?
○ What caused it to be detected?
○ What remediation methods were used?
○ What worked and what did not work?
○ What could have been done to avoid the incident occurring?
This phase is often referred to as the "lessons learned" phase. After this phase you return to preparation once again, to ensure that you use your lessons learned to make your system more resilient to attacks in the future.
Data Criticality and Prioritization
It is important to know how to allocate resources effectively and efficiently. This involves assessing the severity and prioritization of affected systems. This triage process takes into account a multitude of factors, most importantly whether or not a breach involves data that is private or confidential.
Some of these categories of confidential information are:
○ PII
○ SPI
○ PHI
○ Financial information
○ Intellectual property
○ HVA
○ Corporate information
PII (Personally Identifiable Information)
Just like it sounds, PII is data that can be used to identify an individual. This includes data such as:
○ A subject's full name
○ A subject's birth date
○ A subject's birth place
○ A subject's address
○ A subject's Social Security number
○ Other unique identifiers
SPI (Sensitive Personal Information)
This is privacy-sensitive information about a subject that is NOT identifying information; it is anything about a subject that could be harmful to them if made public. Data that falls under the umbrella of SPI includes:
○ Religious beliefs
○ Political opinions
○ Trade union membership
○ Gender
○ Sexual orientation
○ Racial or ethnic origin
○ Health information
PHI (Personal Health Information)
Refers to medical records relating to a subject. A data leak involving PHI can be especially devastating because, unlike a credit card number or bank account number, it cannot be changed; a person's medical history is permanently a part of them.
○ This is why PHI is often a target of perpetrators of insurance fraud and blackmail.
In some cases, reputational damage can be caused by a leak of PHI.
Financial Information
This refers to data held about bank and investment accounts, as well as tax returns.
IP (Intellectual Property)
This is information created by an individual or company, usually relating to the products or services they provide. IP can include:
○ Copyrighted works
○ Patents
○ Trademarks
IP would likely be a target of a business competitor or counterfeiter.
Corporate Information
This data involves information that is essential to how a business functions.
This information includes:
○ Product development, production, and maintenance
○ Customer contact and billing
○ Financial operations and controls
○ Legal obligations to keep accurate records for a certain time frame
○ Contractual obligations to third parties
If a company is planning to merge with another, this information, if leaked with the goal of stock market manipulation, could have a major impact on all of the parties involved.
HVA (High Value Assets)
A high value asset is a critical information system, one that is likely relied upon for essential functions of the organization to continue. If any aspect of the CIA triad were compromised on such a system, there would likely be major consequences. HVAs must be easily identifiable by a response team, and any incident involving an HVA must be considered high priority.
Reporting Requirements
This describes the requirement to notify external parties when faced with certain types of incidents:
○ Data exfiltration: An attacker breaks in and transfers data out to another system. (Note that even suspected breaches are treated the same as confirmed breaches.)
○ Insider data exfiltration: Someone inside the company perpetrates the data exfiltration.
○ Device theft/loss: A device is lost or stolen.
○ Accidental data breach: Human error or misconfiguration leads to a data breach (such as sending confidential information to the wrong person).
○ Integrity/availability: Attacks that target a service that needs to be up at all times, or that corrupt essential data.
Response Coordination
Incident response may involve coordination between different departments, depending on the type of situation being faced.
○ Senior leadership will be contacted so that they can consider how the incident affects their departments and functions. They can then make decisions accordingly.
○ Regulatory bodies will need to be contacted by companies that fall within the industry that they oversee ○ Law enforcement agencies may also need to be reached out to if there are legal implications caused by the attack. A legal officer will be the person to communicate with law enforcement if it is required ○ Human resources (HR) will be contacted if an insider threat is suspected of being the perpetrator of an attack, so that no breaches of employment law or employment contracts are made. ○ Public Relations (PR): any attack that could cause the public to have a negative image of the company will require that PR get involved to handle damage control of the organization's public image.
CySA+ Lesson 6B: Apply Detections and Containment Processes
The OODA Loop A model developed by the US military strategist Colonel John Boyd. ○ The order of operations of the OODA loop is: Observe Orient Decide Act The goal of the OODA loop is to prevent a purely reactionary response to an attack and to take the initiative with a measured, well-thought-out response.
OODA: Observe You need information about the network and the specific incident, and a means of filtering and selecting the appropriate data. In this step you identify the problem or threat and gain an understanding of the internal and external environment to the best of your ability.
OODA: Orient Determine what the situation involves. Have you been alerted to the attack as it was happening, or are you just noticing an attack that has been occurring for some time? What does the attacker want? ○ Information? ○ To bring your system down? ○ To corrupt your stored data? What resources do the attackers have at their disposal? ○ Is it a script kiddie using a tool they bought off the internet, or an APT that has created an attack to target you specifically?
OODA: Decide What options do you have for countermeasures? What are your goals? 
Can we prevent a data breach from happening, or should we focus on gathering forensic evidence to try to prosecute later? Having a decisive action in mind can prevent you from losing time on decision making later. It can also prevent you from getting caught up in the phase of analyzing the attack ("analysis paralysis").
OODA: Act This is where you take the actions to remediate the situation as quickly as possible. Similar to the incident response phases, from here you may need to start the loop again until the situation is completely resolved.
Lockheed Martin IDD The Lockheed Martin white paper on Intelligence-Driven Defense defines "defensive capabilities" as the ability to: Detect—Identify the presence of an adversary and the resources at his or her disposal. Destroy—Render an adversary's resources permanently useless or ineffective. Degrade—Reduce an adversary's capabilities or functionality (sometimes this can only be done temporarily). Disrupt—Interrupt an adversary's communications or frustrate or confuse their efforts. Deny—Prevent an adversary from learning about your capabilities or accessing your information assets. Deceive—Supply false information to distort the adversary's understanding and awareness.
Impact analysis Damage incurred in an incident can have wide-reaching consequences, including: ○ Damage to data integrity and information system resources ○ Unauthorized changes to the configuration of data or information systems ○ Theft of data or resources ○ Disclosure of confidential or sensitive data ○ Interruption of services and system downtime After the detection of an incident, triage is used to determine the security level and classification of the incident.
Incident Security Level Classification Having classifications of security breaches allows you to refine the quality of your response. 
○ Data Integrity ○ System process criticality ○ Downtime ○ Economic ○ Data correlation ○ Reverse engineering ○ Recovery time ○ Detection time
Containment It is important to get past the analysis phase and into a phase where you move to actually contain the breach, especially since it is likely that the breach may have been going on for quite some time before it was even detected. ○ 20% of reported breaches took days to discover ○ 40% took months to discover ○ Only 10% of reported breaches were discovered within an hour When it is noticed (or suspected) that a system is compromised, you must move quickly to identify the correct containment technique. Your course of action will depend on several factors: ○ Safety and security of personnel ○ Preventing the intrusion from continuing ○ Identifying whether the intrusion is a primary or secondary one ○ Avoiding alerting the attacker to the fact that the intrusion was discovered ○ Preserving forensic evidence
Isolation-Based Containment This method of containment involves removing an affected component from whatever larger environment it is a part of. This prevents the affected system from interfacing with the network, the other devices within it, and the internet. ○ This is achieved simply by pulling the network plug and ensuring that all wireless functionality is disabled. ○ You can also disable the switch port that the device is using
Segmentation-Based Containment Segmentation uses VLANs, routing/subnets, and firewall ACLs to prevent a host or group of hosts from communicating outside of a protected segment. Segmentation can be used with honeypots/honeynets to analyze an attacker's traffic as it is redirected to the honeypot. This method can allow you to analyze the attack as it runs, while it is not allowed to reach the intended target within your network. In most cases the attacker will assume that the attack is continuing successfully. 
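A segmentation ACL of the kind described above might look like the following sketch (the subnet, the forensics-collector address, and the port are hypothetical, chosen only for illustration):

```shell
# Hypothetical sketch: confining a quarantined segment (10.99.0.0/24) with iptables.
# Quarantined hosts may reach only a forensics collector (10.0.5.10) over SSH;
# all other forwarded traffic from the segment is logged and dropped.
iptables -A FORWARD -s 10.99.0.0/24 -d 10.0.5.10 -p tcp --dport 22 -j ACCEPT
iptables -A FORWARD -s 10.99.0.0/24 -j LOG --log-prefix "QUARANTINE-DROP: "
iptables -A FORWARD -s 10.99.0.0/24 -j DROP
```

Equivalent rules could be expressed as router ACLs, firewall policies, or cloud security groups; the design choice is the same: default-deny for the quarantined segment with a narrow, monitored exception.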
Course of Action (COA) Matrix The CoA matrix is a table that shows the defensive capabilities available at each phase of the cyber kill chain.
CySA+ Lesson 6C: Apply Eradication, Recovery and Post-Incident Processes It is the duty of a CySA professional to suggest steps that can be taken during the latter part of the incident response cycle. A checklist can be created to help with analyzing and developing insight into how to navigate your way through the eradication/recovery and post-incident response phases.
Eradication After the previous steps have been completed (identify, analyze, contain) you can move on to the mitigation and eradication of the problem from your systems. To give an example, if there is a worm that has infected a system or network that you have now isolated, you can proceed to work on removing the worm from your system. If you do not isolate the system first, you may be trying to remove something that is in the process of continuing to spread.
Sanitization and Secure Disposal If a contaminated system is deemed not worth the effort of fixing, you may decide to simply destroy the affected system or drive and replace it with a backup. ○ For example, you may opt to discard infected drives if you know that you have clean backups that can immediately be used to replace them Even in cases like this you want to ensure that no confidential data can be gleaned from the discarded drives. Data/drive sanitization is the process of irreversibly removing or destroying data stored on a memory device (i.e., nonvolatile memory such as HDDs, SSDs, flash memory, CDs, and DVDs).
(CE) Cryptographic Erasure Cryptographic erasure is a method that is used with SEDs (Self-Encrypting Drives) when you want the data on the drive to no longer be accessible. Everything that is written to an SED is encrypted automatically. 
If you no longer want that drive to be readable, you delete the cryptographic key that allows access to the drive. While the data is still on the drive, it is encrypted and unrecoverable.
Zero-fill If a cryptographic erase is not an option, you can opt to use another sanitization method called zero-fill. Zero-filling is a method of formatting a hard disk where the formatter wipes the disk contents by overwriting them with zeros. Each bit present on the disk is replaced with a value of zero. Once the data is overwritten with zeros, the process cannot be undone on the hard drive. ○ This can hinder forensic tools from reconstructing any meaningful data from the drive Zero-fill is more useful for HDDs than SSDs, because an HDD and an SSD communicate available storage locations in different ways. In the case of SSDs, some may opt to instead use an SE (Secure Erase) utility like "Disk Wipe" or Heidi Computers' "Eraser".
Reconstruction and Reimaging After a sanitization process, if you want to reuse the disk after it has been wiped, you can reimage the host disk using a known clean backup that was created prior to the incident. Another option is to reconstruct a system using a configuration template or scripted install from trusted media.
Reconstitution of Resources Reconstitution is a method used when reimaging is not possible for your situation (perhaps due to a lack of access to backups). You are rebuilding your system and regaining or reconstructing what was lost. 
Reconstitution involves the following steps: ○ Analyze the system for signs of malware after the system has been contained ○ Terminate suspicious processes and securely delete them from the file system ○ Identify and disable autostart locations in the file system, Registry, and task scheduler ○ Replace contaminated OS and application processes with clean versions from a trusted source ○ Reboot the system and analyze it for signs of continued malware infection ○ If there is continued infection and you can't identify its source in the file system, investigate whether either the firmware or a USB device being used has been infected ○ Once the system's tests for malware are negative, you can return it to its role in the system while continuing to monitor it closely to ensure that it does not exhibit any additional suspicious activity
Recovery After the malware has been removed and the system has been brought back from its infection, you may also have to restore some lost capabilities, services, or resources. The following are some examples of incident recovery: ○ If data is maliciously deleted from a database, you can restore the data if you have been creating backups prior to the attack. ○ If a DDoS attack takes down your web servers, you may need to manually reboot your servers and perform a health check on them before bringing them back up for public usage ○ If an employee inadvertently introduces malware into your system through their workstation, you can attempt remedial measures through the use of antimalware software. If the problem persists, you may have to wipe the entire drive and reinstall the OS. With recovery you want to ensure that the system is no longer vulnerable to the same attack that affected it before.
Patching In the case that an attack was performed using a software or firmware exploit, the system will need to be patched. (This of course assumes that a patch for the vulnerability exists in the first place, which may not always be the case.) 
You may also want to take time to determine why the system was not patched beforehand. ○ In some cases systems may need to be configured to automatically install updates and patches. Misconfiguration or human error may lead to this setting not being toggled on, or being accidentally toggled off. If no patch is available, you need to apply different mitigation methods, such as monitoring of the system or network segmentation.
Restoration of Permissions After an incident you may need to check whether an attacker was able to elevate permissions for themselves, or remove the permissions of those who legitimately held higher permissions in the system. After the malware has been removed, the vulnerability patched, and the system restored, you may also need to review the permissions that are on the system and ensure that they are as they should be. If not, you will have to go through and reinstate the proper permissions and privileges.
Verification of Logging If an attacker can affect and disable your logging systems, then they will do so. After an incident you want to ensure that your scanning, monitoring, and logging systems are functioning properly, and that they have not been disabled or somehow subverted by the attacker.
Vulnerability Mitigation and System Hardening Reducing the attack surface is a MAJOR part of system hardening. Simply put, reducing the attack surface means removing unnecessary functions or services that are not in use by the organization so that they cannot be used as an attack vector by a malicious entity. Here are some examples: ○ Disable unused user accounts. ○ Deactivate/remove unnecessary components, including hardware, software, network ports, applications, operating system processes and services. 
○ Implement patch management software that will allow you to test updates before deployment into your systems ○ Restrict access to USB and Bluetooth on devices ○ Restrict shell commands and command-line usage and functionality based on the privileges that a user should have
Post-Incident Activities After a threat has been eradicated and the system restored, there are some post-incident activities that must be completed. Activities like report writing, evidence retention, and updates to response plans can help you to see what went wrong in this situation and what you can do in the future to prevent such an event from occurring again.
Report Writing This involves explaining all of the relevant details of a threat event or potential threat event and the effects that it can have on the organization. It is important to note that the people reading these reports may not have the same technical knowledge as you do, so you must be able to explain technical subject matter in a non-technical way. This way, any executive decisions that have to be made can be based on an understanding of what your report entailed. Reports should: ○ Show the date of composition as well as the names of the authors ○ Be marked confidential or secret (based on what the requirements of the information are) ○ Start with an executive summary for the target audience, letting them know the TL;DR of what the report is going to be about and which sections of the report go into which topics (especially useful for longer reports) ○ Make points directly, coherently, and concisely (don't beat around the bush) 
Evidence Retention If an attack has any legal or regulatory impact, evidence of the incident must be preserved. Different regulatory bodies have different requirements for how long certain evidence must be retained. If an organization wishes to pursue civil or criminal prosecution of the incident's perpetrators, the evidence must be collected and stored using the proper forensics procedures.
Lessons Learned Report This post-incident activity reviews the incident in an attempt to clearly identify how, if at all possible, the incident could have been avoided. Lessons learned reports can be structured using the "six questions" method: ○ Who was the adversary? ○ Why was the incident perpetrated? ○ When did the incident occur/when was it detected? ○ Where did the incident occur/what was targeted? ○ How did the incident occur/what methods did the attacker use? ○ What security controls would have provided better mitigation or improved the response?
Change Control Process Corrective actions and remediating controls need to be introduced in a planned way, following the organization's change control process. Change control validates the compatibility and functionality of the new system and ensures that the update or installation process has minimal impact on business functions.
CySA+ Lesson 7 Part A Objective 5.2: Given a scenario, apply security concepts in support of organizational risk mitigation
Apply Risk Identification, Calculation, and Prioritization Processes As cybersecurity professionals, you have the responsibility to identify risks and protect your systems from them. "Risk" refers to the measure of your exposure to the chance of damage or loss. It signifies the likelihood of a hazard or dangerous threat occurring. Risk is often associated with the loss of a system, power, or network, as well as other physical losses. 
Risk can also affect people, practices, and processes.
Risk Identification Process Enterprise Risk Management (ERM) is defined as the comprehensive process of evaluating, measuring, and mitigating the many risks that can affect an organization's productivity. Due to the modern era's heavy focus on information and interconnected systems across the world, much of the responsibility for ERM is taken on by the IT department. There are quite a few reasons for the adoption of ERM; some examples include: ○ Keeping confidential customer information out of the hands of unauthorized parties ○ Keeping trade secrets out of the public sphere ○ Avoiding financial loss due to damaged resources ○ Avoiding legal trouble ○ Maintaining a positive public perception of the enterprise's brand image ○ Establishing trust and liability in a business relationship ○ Meeting stakeholders' objectives As a cybersecurity analyst you may not be able to make all of the decisions regarding risk management. Such decisions may be made by business stakeholders or project management teams. However, you may be in the position to understand the technical risks that exist and may need to bring them to the attention of decision makers.
Risk Management Framework NIST's Managing Information Security Risk special publication is one starting point for applying risk identification and assessment. NIST identifies four structural components or functions: ○ Frame ○ Assess ○ Respond ○ Monitor
Frame Frame—Establish a strategic risk management framework, supported by decision makers at the top tier of the organization. The risk frame sets an overall goal for the degree of risk that can be tolerated and demarcates responsibilities. The risk frame directs and receives inputs from all other processes and should be updated as changes to the business and security landscape arise.
Assess Identify and prioritize business processes/workflows. 
Perform a systems assessment to determine which IT assets and procedures support these workflows. Identify risks to which information systems, and therefore the business processes they support, are exposed.
Respond Mitigate each risk factor through the deployment of managerial, operational, and technical security controls.
Monitor Monitor—Evaluate the effectiveness of risk response measures and identify changes that could affect risk management processes.
System Assessment Businesses don't make money off of just being secure. Their security is merely a means to protect the assets of the company that allow the company to run its business operations smoothly. Assets must be valued according to the liability that the loss or damage of the asset would create: ○ Business continuity - refers to losses from no longer being able to fulfill contracts and orders due to the breakdown of critical systems. ○ Legal - These are responsibilities in civil and criminal law. Security incidents could make an organization liable to prosecution (criminal law) or for damages (civil law). ○ Reputational - These are losses due to negative publicity, and the consequent loss of market position and customer trust.
System Assessment Process In order to design a risk assessment process, you must know what you want to make secure through a process of system assessments. This means compiling an inventory of the organization's business processes and, for each process, the tangible and intangible assets and resources that support them. These could include the following: ○ People - Employees, visitors, and suppliers. ○ Tangible assets - Buildings, furniture, equipment, machinery, electronic data files and paper documentation ○ Intangible assets - Ideas, commercial reputation, brand, and so on. ○ Procedures - Supply chains, critical procedures, standard operating procedures, and workflows. 
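An inventory like this is typically held in an asset management database. As a minimal sketch (the record fields are illustrative, not prescribed by any particular product), each entry might be modeled as:

```python
from dataclasses import dataclass

# Minimal sketch of an asset inventory record (field names are illustrative).
@dataclass
class Asset:
    asset_id: str        # unique tracking ID, e.g. a barcode or RFID tag
    asset_type: str      # "server", "laptop", "database", ...
    location: str
    owner: str           # responsible user or department
    value: float         # monetary value, reusable later for risk calculations
    mission_essential: bool = False  # flags assets supporting critical functions

inventory = [
    Asset("A-0001", "server", "DC-1", "IT Ops", 12_000.0, mission_essential=True),
    Asset("A-0002", "laptop", "HQ", "Sales", 1_500.0),
]
# Assets supporting mission essential functions are restored first after a disruption.
critical = [a.asset_id for a in inventory if a.mission_essential]
```

Recording a value and a criticality flag per asset is what later makes quantitative risk calculation and restoration prioritization possible.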
Mission Essential Functions Mission essential functions (MEFs) are the sets of an organization's functions that must be continued throughout, or resumed rapidly after, a disruption of normal operations. The organization must be able to perform these functions as close to normal as possible. If there is a service disruption, the mission essential functions must be restored first.
Asset/Inventory Tracking There are many software suites and associated hardware solutions available for tracking and managing assets (or inventory). An asset management database can be configured to store as much or as little information as is deemed necessary, though typical data would be type, model, serial number, asset ID, location, user(s), value, and service information.
Threat and Vulnerability Assessment Each asset and process must be assessed for vulnerabilities and their consequent exposure to threats. This process of assessment will be ongoing, as systems are updated and as new threats emerge. The range of information assets will highlight the requirements needed to secure those assets. For example, you might operate a mix of host OSs (Windows, Linux, and macOS, for instance) or use a single platform. You might develop software hosted on your own servers or use a cloud platform. The more complex your assets are, the more vulnerability management processes you will need to have in place.
Risk Calculation When determining how to protect computer networks, computer installations, and information, risk calculation or analysis is a process within the overall risk management framework to assess the probability (or likelihood) and magnitude (or impact) of each threat scenario. Calculating risk is complex in practice, but the method can be simply stated as finding the product of these two factors: ○ Probability - The chance of the threat being realized ○ Magnitude - The overall impact that a successful attack or exploit would have 
*Probability × Magnitude = Risk*
Quantitative Risk Calculation A quantitative assessment of risk attempts to assign concrete values to the elements of risk. Probability is expressed as a percentage and magnitude as a cost (monetary) value, as shown in the following formula: SLE = Asset Value (AV) × Exposure Factor (EF) EF is the percentage of the asset's value that would be lost. The single loss expectancy (SLE) value is the financial loss that is expected from a specific adverse event. For example, if the asset value is $50,000 and the EF is 5%, the SLE calculation is as follows: SLE = $50,000 × 5% = $2,500 If you know or can estimate how many times this loss is likely to occur in a year (the annualized rate of occurrence, or ARO), you can calculate the risk cost on an annual basis as the annualized loss expectancy (ALE): ALE = SLE × ARO The problem with quantitative risk assessment is that the process of determining and assigning these values can be complex and time consuming. The accuracy of the values assigned is difficult to determine without historical data (often, it must be based on subjective guesswork). However, over time and with experience this approach can yield a detailed and sophisticated description of assets and risks and provide a sound basis for justifying and prioritizing security expenditure.
Qualitative Risk Calculation Qualitative analysis is generally scenario-based. The qualitative approach seeks out people's opinions of which risk factors are significant. Qualitative analysis uses simple labels and categories to measure the likelihood and impact of risk. ○ For example, impact ratings can be severe/high, moderate/medium, or low; and likelihood ratings can be likely, unlikely, or rare. Another simple approach is the "traffic light" impact grid. For each risk, a simple red, amber, or green indicator can be put into each column to represent the severity of the risk, its likelihood, the cost of controls, and so on. This simple approach can give a quick impression of where efforts should be concentrated to improve security. 
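The quantitative formulas above can be sketched in a few lines of Python. The asset value and EF follow the worked example; the ARO of 2 occurrences per year is an assumed figure for illustration:

```python
# Sketch of the quantitative risk formulas: SLE = AV x EF, ALE = SLE x ARO.
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """Expected loss from one occurrence; EF is a fraction (5% -> 0.05)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """Expected yearly loss; ARO is the number of occurrences per year."""
    return sle * aro

sle = single_loss_expectancy(50_000, 0.05)  # 2500.0, matching the worked example
ale = annualized_loss_expectancy(sle, 2)    # assumed ARO of 2 -> 5000.0 per year
```

The ALE figure is what you would compare against the annual cost of a countermeasure when justifying security expenditure.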
Business Impact Analysis Business impact analysis (BIA) is the process of assessing what losses might occur for each threat scenario. ○ For instance, if a roadway bridge crossing a local river is washed out by a flood and employees are unable to reach a business facility for five days, estimated costs to the organization need to be assessed for lost manpower and production. Impacts can be categorized in several ways, such as impacts on life and safety, impacts on finance and reputation, and impacts on privacy. BIA is governed by metrics that express system availability. These include: ○ Maximum Tolerable Downtime (MTD) ○ Recovery Time Objective (RTO) ○ Work Recovery Time (WRT) ○ Recovery Point Objective (RPO)
Maximum Tolerable Downtime MTD is the longest period that a business function outage may last without causing irrecoverable business failure. Each business process can have its own MTD, such as a range of minutes to hours for critical functions, 24 hours for urgent functions, 7 days for normal functions, and so on. MTDs vary by company and event. Each function may be supported by multiple system assets. The MTD sets the upper limit on the amount of recovery time that system and asset owners have to resume operations. ○ For example, an organization specializing in medical equipment may be able to exist without incoming manufacturing supplies for three months because it has stockpiled inventory. After three months the organization will not have enough supplies and may not be able to manufacture additional products, leading to failure. In this case, the MTD for the supply chain is three months.
Recovery Time Objective (RTO) RTO is the period following a disaster that an individual IT system may remain offline. 
This is the amount of time it takes to identify that there is a problem and then perform recovery (for example, restore from backup or switch in an alternative system).
Work Recovery Time (WRT) WRT represents the time following systems recovery when there may be added work to reintegrate different systems, test overall functionality, and brief system users on any changes or different working practices so that the business function is again fully supported.
Recovery Point Objective (RPO) RPO is the amount of data loss that a system can sustain, measured in time. In other words, if a database is destroyed by malware, an RPO of 24 hours means that the data can be recovered (from a backup copy) to a point not more than 24 hours before the database was infected. For example, a customer-leads database for a sales team might be able to sustain the loss of a few hours' or days' worth of data (the salespeople will generally be able to remember who they have contacted and re-key the data manually). Conversely, order processing may be considered more critical, as any loss will represent lost orders and it may be impossible to recapture web orders or other processes initiated only through the computer system, such as linked records to accounting and fulfillment.
BIA Endnote Maximum Tolerable Downtime (MTD) and Recovery Point Objective (RPO) help to determine which business functions are critical and to specify appropriate risk countermeasures. For example, if your RPO is measured in days, then a simple tape backup system should suffice; if RPO is zero or measured in minutes or seconds, a more expensive server cluster backup and redundancy solution will be needed.
Risk Prioritization Risk mitigation/remediation is the overall process of reducing exposure to, or the effects of, risk factors. There are several ways of mitigating risk. If you deploy a countermeasure that reduces exposure to a threat or vulnerability, that is risk deterrence (or reduction). 
Risk reduction refers to controls that can either make a risk incident less likely or less costly (or perhaps both). ○ For example, if fire is a threat, a policy controlling the use of flammable materials on-site reduces likelihood, while a system of alarms and sprinklers reduces impact by (hopefully) containing any incident to a small area. ○ Another example is off-site data backup, which provides a remediation option in the event of servers being destroyed by fire.
Risk Avoidance Additional risk response strategies are as follows: ○ Risk avoidance - This means that you stop doing the activity that is risk-bearing. For example, a company may develop an in-house application for managing inventory and then try to sell it. If, while selling it, the application is discovered to have numerous security vulnerabilities that generate complaints and threats of legal action, the company may decide that the cost of maintaining the security of the software is not worth the revenue and withdraw it from sale. Another example involves the use of an application that creates an automated and convenient resource for the organization and makes employees' jobs exponentially easier, but also introduces a major security vulnerability. Once that vulnerability is discovered, the organization may opt to stop using that software.
Risk Transference Aside from risk avoidance there is also risk transference. Risk transference involves assigning risk to a third party, such as an insurance company, or a contract with a supplier that defines liabilities. For example, a company could stop in-house maintenance of an e-commerce site and contract the services to a third party who would be liable for any fraud or data theft.
Risk Acceptance Risk acceptance (or retention) means that no countermeasures are put in place, either because the level of risk does not justify the cost or because there will be unavoidable delay before the countermeasures are deployed. 
In this case, you should continue to monitor the risk (as opposed to ignoring it).
Communication of Risk Factors To ensure that the business stakeholders understand each risk scenario, you should articulate it such that the cause and effect can clearly be understood by the business owner of the asset. ○ A DoS risk should be put into plain language that describes how the risk would occur and, as a result, what access is being denied to whom, and the effect on the business. For example: "As a result of malicious or hacking activity against the public website, the site may become overloaded, preventing clients from accessing their client order accounts. This will result in a loss of sales for so many hours and a potential loss of revenue of so many dollars."
Risk Register A risk register is a document showing the results of risk assessments in a comprehensible format. The register may resemble the traffic light grid, with columns for impact and likelihood ratings, date of identification, description, countermeasures, owner/route for escalation, and status. A risk register should be shared between stakeholders (executives, department managers, and senior technicians) so that they understand the risks associated with the workflows that they manage.
Training and Exercises Part of the risk management framework is ongoing monitoring to detect new sources of risk or changed risk probabilities or impacts. Security controls will, of course, be tested by actual events, but it is best to be proactive and initiate training and exercises that test system security. Training/exercises include: ○ Tabletop exercises ○ Penetration testing ○ Red and blue team exercises
Tabletop Exercises A tabletop exercise is a facilitator-led training event where staff practice responses to a particular risk scenario. The facilitator's role is to describe the scenario as it starts and unfolds, and to prompt participants for their responses. 
As well as a facilitator and the participants, there may be other observers and evaluators who witness and record the exercise and assist with follow-up analysis. A single event may use multiple scenarios, but it is important to use practical and realistic examples, and to focus on each scenario in turn. These exercises are simple to set up but do not provide any practical evidence of things that could go wrong, time to complete, and so on.
Penetration Testing You might start reviewing technical and operational controls by comparing them to a framework or best practice guide, but this sort of paper exercise will only give you a limited idea of how exposed you are to "real world" threats. A key part of cybersecurity analysis is to actively test the selection and configuration of security controls by trying to break through them in a penetration test (or pen test). When performing penetration testing, there needs to be a well-defined scope setting out the system under assessment and limits to testing, plus a clear methodology and rules of engagement. There are many models and best practice guides for conducting penetration tests. One is NIST's SP 800-115 (csrc.nist.gov/publications/detail/sp/800-115/final). (Specific models for penetration testing will NOT be on the CySA+ exam, but knowing what a penetration test is remains crucial.)
Red and Blue (and White) Team Exercises Penetration testing can form the basis of functional exercises. One of the best-established means of testing a security system for weaknesses is to play "war game" exercises in which the security personnel split into teams: Red team - this team acts as the adversary, attempting to penetrate the network or exploit it as a rogue internal attacker. 
The red team might be selected members of in-house security staff or might be a third-party company or consultant contracted to perform the role.
Blue team - this team operates the security system with the goal of detecting and repelling the red team.
White team - this team sets the parameters for the exercise and is authorized to call a halt if the exercise gets out of hand or should no longer be continued for business reasons. The white team determines the objectives and success/fail criteria for the red and blue teams.
The white team also takes responsibility for reporting the outcomes of the exercises, diagnosing lessons learned, and making recommendations for improvements to security controls.

CySA+ Lesson 7 Part B

Objectives
7B covers the information required for the CySA+ exam objective:
○ Explain the importance of frameworks, policies, procedures, and controls

Frameworks, Policies, and Procedures
Some organizations have developed IT service frameworks to provide best practice guides to implementing IT and cybersecurity.
As compliance is a critical part of information security, it is important that you understand the role and application of these frameworks.
The use of a framework allows an organization to make an objective statement of its current cybersecurity capabilities, identify a target level of capability, and prioritize investments to achieve that target.

Enterprise Security Architecture (ESA)
An enterprise security architecture (ESA) framework is a list of activities and objectives undertaken to mitigate risks.
Frameworks can shape company policies and provide checklists of procedures, activities, and technologies that should ideally be in place.
This is valuable for giving structure to internal risk management procedures, and it also provides an externally verifiable statement of regulatory compliance.

Prescriptive Frameworks
Many ESA frameworks adopt a prescriptive approach.
These frameworks are usually driven by regulatory compliance factors.
In a prescriptive framework, the controls defined by the framework must be deployed by the organization. Each organization will be audited to ensure compliance.
Examples of ESA frameworks include:
○ Control Objectives for Information and Related Technology (COBIT)
○ International Organization for Standardization (ISO) 27001
○ Payment Card Industry Data Security Standard (PCI DSS)
Within these frameworks, a maturity model is used:
○ Maturity model - a component of an ESA framework that is used to assess the formality and optimization of security control selection and usage, and to address any gaps
○ Maturity models review an organization against expected goals and determine the level of risk the organization is exposed to based on that review

Risk-Based Frameworks
Because prescriptive frameworks are checklist driven, they can struggle to keep pace with a continually evolving threat landscape, so an organization may opt for a risk-based approach instead.
A risk-based framework uses risk assessment to prioritize security control selection and investment.

NIST Cybersecurity Framework
A relatively new risk-based framework that is focused on IT security rather than IT service provision.
This framework covers three core areas:
○ Framework core - identifies five cybersecurity functions (Identify, Protect, Detect, Respond, and Recover); each function can be divided into categories and subcategories
○ Implementation tiers - assess how closely core functions are integrated with the organization's overall risk management process; each tier is classed as Partial, Risk Informed, Repeatable, or Adaptive
○ Framework profiles - used to supply statements of current cybersecurity outcomes and target cybersecurity outcomes, to identify the investments that will be most productive in closing the gap in cybersecurity capabilities shown by comparison of the current and target profiles. You want to capture a baseline and see where you are in terms of the framework.
If you are currently at a low maturity level and want to be at a higher one, the profile comparison identifies the things that need to be done to reach that higher level.

Audits and Assessments
Auditing is an examination of an organization's compliance with standards set by external or internal entities.
These examinations will include:
○ Quality control and quality assurance
○ Verification and validation
○ Audits
○ Scheduled reviews and continual improvement

Quality Control (QC)
The process of determining whether a system is free from defects and deficiencies.
You want to make sure that a car you buy meets QC standards, as any defects or hardware/technical deficiencies could result in catastrophic events. "Bruh, I can't turn!"

Quality Assurance (QA)
QA is defined as the processes that analyze what constitutes quality and how it can be measured and checked.
QA comprises the actions that happen at the time of putting something together.

Verification and Validation (V&V)
QC and QA take the form of verification and validation (V&V) within software development.
Verification - a compliance-testing process to ensure that the security system meets the requirements of a framework or regulatory environment, or that a product or system meets its design goals.
Validation - the process of determining whether the security system is fit for purpose (once the code has been installed, does it do what it's meant to do?).
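The verification/validation distinction can be made concrete with a small sketch. This is a hypothetical password-policy example in Python; the policy value, function names, and checks are invented for illustration and are not taken from any framework:

```python
# Hypothetical illustration of verification vs. validation for a
# password-length control. The required minimum length is an assumed
# framework requirement, invented for this example.

REQUIRED_MIN_LENGTH = 12  # assumed requirement from a framework/regulation


def verify_config(config: dict) -> bool:
    """Verification: does the deployed setting meet the stated requirement?"""
    return config.get("min_password_length", 0) >= REQUIRED_MIN_LENGTH


def validate_behavior(accepts_password) -> bool:
    """Validation: once installed, does the system actually reject weak input?"""
    # Fit for purpose means a weak password must be refused in practice.
    return not accepts_password("short1!")


# A toy deployed system whose behavior we can exercise:
deployed_config = {"min_password_length": 14}
accepts = lambda pw: len(pw) >= deployed_config["min_password_length"]

print(verify_config(deployed_config))  # True: the config meets the requirement
print(validate_behavior(accepts))      # True: a weak password is rejected
```

Verification checks the paperwork (the setting matches the requirement); validation exercises the running system to confirm the intended outcome actually happens.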
Assessments and Evaluation
Assessment - the process of testing the subject against a checklist of requirements in a highly structured way, for measurement against an absolute standard.
○ Involves an "absolute standard" and thus, as mentioned above, involves a checklist approach.
Evaluation - a less methodical process of testing that is aimed at examining outcomes or proving the usefulness of the subject being tested.
○ Evaluation is more likely to use comparative measurements and is more likely to depend on the judgment of the evaluator than on a checklist or framework.

Audit
A more rigid process than an assessment or evaluation, in which the auditor compares the organization against a predefined baseline to identify areas that require improvement/remediation.
○ Audits are generally required in regulated industries such as payment card and healthcare processing.
○ Audits are not based on evaluations or assessments; they are based on known standards. These standards are requirements that everyone has to meet.
○ Unlike an assessment, an audit is not tailored to the organization; your organization must meet the standard.

Scheduled Reviews and Continual Improvement
The ultimate step in incident response is a "lessons learned" review and report, in which any improvements to procedures or controls that could have helped to mitigate the incident are discussed.
A system of scheduled reviews extends this "lessons learned" approach to a regular calendar-based evaluation.
Scheduled reviews are particularly important if the organization is lucky enough not to suffer from frequent incidents.
To prepare for a scheduled review, you will complete a report detailing some of the following:
○ Major incidents experienced in the last period.
○ Trends and analysis of threat intelligence, both directly affecting your company and the general cybersecurity industry.
○ Changes and additions to security controls and systems.
○ Progress toward adopting or updating compliance with a framework or maturity model.

Continuous Monitoring
In contrast to this reactive approach, continuous security monitoring (CSM), also referred to as security continuous monitoring, is a process of continual risk reassessment.
Rather than a process driven by incident response, CSM is an ongoing effort to obtain information vital to managing risk within the organization.
CSM ensures that all key assets and risk areas are under constant surveillance by finely tuned systems that can detect a wide variety of issues.
CSM architecture carefully tracks the many components that make up an organization: network traffic, internal and external communications, host maintenance, and business operations.

Detective Controls
Detective controls identify something that has already happened and try to determine the what, when, and how of the event.
Examples of detective controls are IDSs and audit logs.

Common Vulnerability Scoring System (CVSS)
CVSS v3.1 was released in June 2019 by FIRST (the Forum of Incident Response and Security Teams) with the goal of providing an open framework for communicating the characteristics and impacts of software and hardware security vulnerabilities.
This is a quantitative approach that aims to provide accurate measurements of vulnerability characteristics by using scores that measure the severity of a risk (it does not measure the probability of the risk itself).
○ This means that the score generated by these metrics will only reveal how greatly a system would be impacted by the vulnerability.

CVSS 3.1 Metrics
The overall CVSS score is determined by multiple factors:
○ AV (Attack Vector)
○ AC (Attack Complexity)
○ PR (Privileges Required)
○ UI (User Interaction)
○ S (Scope)
○ C (Confidentiality)
○ I (Integrity)
○ A (Availability)
Each of the factors above has its own set of classifications that will determine the overall severity score.
In a fully fleshed-out CVSS 3.1 metrics report, the report would look like this:
○ CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:U/C:N/I:N/A:H

Base Score Metrics
CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:U/C:N/I:N/A:H
In the above metric report, the line begins with CVSS:3.1. This just indicates the version of the CVSS metrics that follow. The metrics that follow are separated by forward slashes.

Base Score Metrics
CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:U/C:N/I:N/A:H
The next section indicates the attack vector for the vulnerability or threat:
○ N = Network: The vulnerability is bound to the network stack (extending to the internet).
○ A = Adjacent: This also means that the vulnerability is bound to the network stack, but it is limited to the protocol level, meaning the attack must be launched from the same physical or logical network as the victim (802.11, 802.3, or Bluetooth, for example).
○ L = Local: The target system will likely need to be attacked locally (e.g., via SSH or by exploiting read/write/execute permissions).
○ P = Physical: The attack requires the attacker to touch or manipulate the vulnerable component.
In the above report, the attack vector is marked as Network based.

Base Score Metrics
CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:U/C:N/I:N/A:H
Next, the Attack Complexity is listed.
○ This will be marked as either Low (L) or High (H).
The report above has the complexity as Low.

Base Score Metrics
CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:U/C:N/I:N/A:H
The next group is Privileges Required, meaning the privileges an attacker must have to successfully exploit the vulnerability.
The options here are:
○ N (None): The attacker can exploit the vulnerability with no privileges.
○ L (Low): The attacker can exploit the vulnerability with low privileges.
○ H (High): The attacker can exploit the vulnerability with high privileges.
In our metric above, the attacker must have high privileges to exploit the vulnerability.

Base Score Metrics
CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:U/C:N/I:N/A:H
UI (User Interaction) indicates whether a user must interact with something in order for the vulnerability to be exploited:
○ N (None): The vulnerability can be exploited with no user interaction.
○ R (Required): Successful exploitation requires a user to take some action before the vulnerability can be exploited.
In our example above, user interaction is marked as required, indicated by the "R".

Base Score Metrics
CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:U/C:N/I:N/A:H
S (Scope): This determines whether a successful attack impacts a component other than the vulnerable component itself.
○ U (Unchanged): An exploited vulnerability can only affect resources managed by the same security authority. In this case the vulnerable component and the impacted component are either the same, or are both managed by the same security authority.
○ C (Changed): An exploited vulnerability can affect resources beyond the scope managed by the security authority of the vulnerable component. In this case the vulnerable component and the impacted component are managed by different security authorities.
Our example has the scope set as Unchanged.

Base Score Metrics
CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:U/C:N/I:N/A:H
The final three metrics are the three aspects of the CIA triad:
○ C: Confidentiality
○ I: Integrity
○ A: Availability
These specify which aspects of the triad are affected by the vulnerability. In some cases a vulnerability can affect multiple components, but that is not always the case.
The components of the CIA triad are then marked as N (None), L (Low), or H (High).
In our example, C and I are labeled as having no impact, while Availability is marked as having high impact.

CVSS 3.1 Calculator
https://www.first.org/cvss/calculator/3.1 provides a CVSS 3.1 calculator that can be used to easily generate the score based on the selected metrics.

CVSS Ranges
Using the CVSS allows an analyst to prioritize vulnerabilities, focusing on the most critical ones first and addressing the least critical ones at a later time.

CVSS Ranges (Cont'd)
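The vector-string walk-through above can be sketched as a small parser in Python. The metric abbreviations and the qualitative severity bands (None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0) come from the published CVSS v3.1 specification; the function names are invented for this example, and this sketch parses the vector only, rather than computing the base score itself:

```python
# Sketch: split a CVSS 3.1 vector string into its base metrics, and map a
# base score to the CVSS v3.1 qualitative severity band.

BASE_METRICS = {
    "AV": "Attack Vector",
    "AC": "Attack Complexity",
    "PR": "Privileges Required",
    "UI": "User Interaction",
    "S": "Scope",
    "C": "Confidentiality",
    "I": "Integrity",
    "A": "Availability",
}


def parse_vector(vector: str) -> dict:
    """Split a CVSS:3.1 vector string into a {metric: value} mapping."""
    parts = vector.split("/")
    if not parts[0].startswith("CVSS:"):
        raise ValueError("not a CVSS vector string")
    metrics = {}
    for part in parts[1:]:  # skip the leading version token
        key, _, value = part.partition(":")
        metrics[key] = value
    return metrics


def severity(score: float) -> str:
    """Map a base score to the CVSS v3.1 qualitative severity rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"


example = parse_vector("CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:U/C:N/I:N/A:H")
for key, value in example.items():
    print(f"{BASE_METRICS[key]}: {value}")
print(severity(4.2))  # prints Medium
```

Running this against the example vector from the slides confirms the reading given above: AV:N (Network), PR:H (high privileges required), UI:R (user interaction required), with only Availability impacted at High.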