CySA+ Lessons 5-8 & CVSS (Quiz 2) PDF
Summary
This document is a set of notes on digital forensics, covering topics such as digital forensics techniques, analysis of various types of incident-related information, legal considerations, and best practices during investigation and analysis. Topics within this document include digital forensics procedures, identification, collection, analysis, and reporting as well as legal hold and forensics analyst ethics. It also describes the concept of data acquisition and different methods of data collection, such as live acquisition, crash dump, and hibernation file.
CySA+ Lessons 5-7 (Quiz 2)

CySA+ Lesson 5 Part A: Digital Forensics

Digital Forensics Analysts
○ Identify digital forensics techniques
○ Analyze network-related IOCs
○ Analyze host-related IOCs
○ Analyze application-related IOCs
○ Analyze lateral movement and pivot IOCs

Digital forensics is the science of collecting evidence from computer systems to a standard that will be accepted in a court of law. Like DNA or fingerprints, digital evidence is mostly latent, meaning that the evidence cannot be seen with the naked eye; rather, it must be interpreted using a machine process.

Cybersecurity analysts may be called upon to work closely with forensics analysts following an incident. As part of a CSIRT (Computer Security Incident Response Team) or SOC (Security Operations Center), a forensics analyst may play several roles following a security incident or in general support of cybersecurity, such as:
○ Investigating and reconstructing the cause of a cybersecurity incident.
○ Investigating whether any crimes, compliance violations, or inappropriate behavior has occurred.
○ Following forensics procedures to protect evidence that may be needed if a crime has occurred.
○ Determining if sensitive, protected data has been exposed.
○ Contributing to and supporting processes and tools used to protect evidence and ensure compliance.
○ Supporting ongoing audit processes and record maintenance.

Digital Forensics Procedures
Your organization may have legal obligations when investigating a cybersecurity incident. You will have obligations to your organization and its stakeholders to find the underlying cause of the incident. It is important to have written procedures to ensure that you handle forensics properly, effectively, and in compliance with applicable regulations. A forensics investigation will entail the following procedures within four general phases:
○ Identification
○ Collection
○ Analysis
○ Reporting

Digital Forensics Procedures (Identification)
During the identification phase you want to ensure that the scene is safe.
○ You do not want to be collecting evidence in the middle of an unsafe or active situation.
Secure the scene to prevent contamination of evidence. Record the scene using video and identify witnesses for interview. Identify the scope of evidence being collected.

Digital Forensics Procedures (Collection)
Ensure authorization to collect the evidence using tools and methods that will withstand legal scrutiny. Document and prove the integrity of evidence as it is collected, and ensure that it is stored in secure, tamper-evident packaging.

Digital Forensics Procedures (Analysis)
Create a copy of evidence for analysis, ensuring that the copy can be related directly to the primary evidence source. Use repeatable methods and tools to analyze the evidence.

Digital Forensics Procedures (Reporting)
Create a report of the methods and tools used, and then present findings and conclusions.

Legal Hold
Legal hold refers to the fact that information that may be relevant to a court case must be preserved. Information subject to legal hold might be defined by regulators or industry best practice, or there may be a litigation notice from law enforcement or attorneys pursuing a civil action. This means that computer systems may be taken as evidence, with all the obvious disruption to a network that entails.

Forensics Analyst Ethics
It is important to note that strong ethical principles must guide forensics analysis.
○ Analysis must be performed without bias.
○ Conclusions and opinions should be formed only from the direct evidence under analysis.
○ Analysis methods must be repeatable by third parties with access to the same evidence.
○ Ideally, the evidence must not be changed or manipulated. If a device used as evidence must be manipulated to facilitate analysis (disabling the lock feature of a mobile phone or preventing remote wipe, for example), the reasons for doing so must be sound and the process for doing so must be recorded.
Defense counsel may try to use any deviation from good ethical and professional behavior to have the forensics investigator's findings dismissed.

Data Acquisition
Data acquisition is the process of obtaining a forensically clean copy of data from a device held as evidence. If the computer system or device is not owned by the organization, there is the question of whether search or seizure is legally valid. This impacts bring-your-own-device (BYOD) policies. For example, if an employee is accused of fraud, you must verify that the employee's equipment and data can be legally seized and searched. Any mistake may make evidence gained from the search inadmissible.

Data acquisition usually proceeds by using a tool to make an image from the data held on the target device. An image can be acquired from either volatile or nonvolatile storage. The general principle is to capture evidence in the order of volatility, from more volatile to less volatile. The ISOC best practice guide to evidence collection and archiving, published as RFC 3227 (tools.ietf.org/html/rfc3227), sets out the general order as follows:
○ CPU registers and cache memory (including cache on disk controllers, GPUs, etc.)
○ Contents of system memory (RAM), including temporary file systems/swap space/virtual memory
○ Data on persistent mass storage devices (HDDs, SSDs, flash memory devices)
○ Remote logging and monitoring data
○ Physical configuration and network topology
○ Archival media

Order of Volatility

System Memory Image Acquisition
A system memory dump creates an image file that can be analyzed to identify the processes that are running, the contents of temporary file systems, Registry data, network connections, cryptographic keys, and so on.
○ It can also be a means of accessing data that is encrypted when stored on a mass storage device.
System memory contains volatile data. There are various methods of collecting the contents of system memory:
○ Live acquisition
○ Crash dump
○ Hibernation file
○ Page file

Live Acquisition
Live acquisition involves a specialized hardware or software tool that can capture the contents of memory while the computer is running. Unfortunately, this type of tool needs to be pre-installed, as it requires a kernel mode driver to function. Two examples are:
○ Memoryze from FireEye (fireeye.com/services/freeware/memoryze.html)
○ F-Response TACTICAL (f-response.com/software/tac)
User mode tools, such as running dd on a Linux host, will only achieve a partial capture. Memoryze is free to download; F-Response is not.

Crash Dump
When Windows encounters an unrecoverable kernel error, it can write the contents of memory to a dump file. On modern systems, there is unlikely to be a complete dump of all the contents of memory, as these could take up a lot of disk space. However, even minidump files may be a valuable source of information.

Hibernation File
This file is created on the disk when the computer is put into a sleep state. If it can be recovered, the data can be decompressed and loaded into a software tool for analysis. The drawback is that network connections will have been closed, and malware may have detected the use of a sleep state and performed anti-forensics.

Page File
This stores pages of memory in use that exceed the capacity of the host's RAM modules. When a system is low on physical memory, the OS will write pages of memory to disk to free up space for the current program that demands more RAM (this is known as virtual memory). The page file is not structured in a way that analysis tools can interpret, but it is possible to search it for strings.
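Since the page file cannot be parsed structurally but can be searched for strings, a minimal sketch of that kind of search (similar to what the Unix strings utility does) might look like the following; the file path and keyword are placeholders:

```python
import mmap
import re
import sys

# Find runs of 6+ printable ASCII bytes in a raw binary dump (e.g., a
# copy of pagefile.sys). mmap avoids loading a multi-GB file into RAM.
PRINTABLE_RUN = re.compile(rb"[\x20-\x7e]{6,}")

def extract_strings(path, keyword=None):
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            for match in PRINTABLE_RUN.finditer(mm):
                s = match.group().decode("ascii")
                if keyword is None or keyword.lower() in s.lower():
                    yield hex(match.start()), s

if __name__ == "__main__":
    # e.g., python strings_scan.py pagefile.bin password
    path = sys.argv[1]
    keyword = sys.argv[2] if len(sys.argv) > 2 else None
    for offset, s in extract_strings(path, keyword):
        print(offset, s)
```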
Disk Image Acquisition
Disk image acquisition refers to nonvolatile storage media, such as hard disk drives (HDDs), solid-state drives (SSDs), and USB flash media drives. To obtain a forensically sound image from nonvolatile storage, you need to ensure that nothing you do alters data or metadata (properties) on the source disk or file system. You must also decide how to perform the acquisition:
○ Live acquisition: Involves copying data while the computer is still running. This may capture more evidence or more data for analysis and reduce the impact on overall services, but data on the actual disks will have changed, so this method may not produce legally acceptable evidence. It may also alert the adversary and allow them time to perform anti-forensics.
○ Static acquisition by shutting down the computer: This runs the risk that the malware detects the shutdown process and performs anti-forensics to try to remove traces of itself.
○ Static acquisition by pulling the plug: This means disconnecting the power at the wall socket (not using the hardware power-off button). This most likely preserves the storage devices in a forensically clean state, but there is a risk of data corruption.
Whichever method is used, it is imperative to document the steps taken and supply a timeline for your actions.

Write Blockers
Write blockers ensure that the image acquisition tools you use do not contaminate the source disk. Write blockers prevent any data on the disk or volume from being changed by filtering write commands at the firmware driver level.
○ MOUNTING A DRIVE AS READ-ONLY WITHIN A HOST OS IS NOT SUFFICIENT. DO NOT TAKE CHANCES.
A write blocker can be implemented as a hardware device or as software running on the forensics workstation.
○ For example, the CRU Forensic UltraDock write blocker appliance supports ports for all main host and drive adapter types.

Hashing
A critical step in the presentation of evidence will be to prove that analysis has been performed on an image that is identical to the data present on the physical media and that neither data set has been tampered with.
○ The standard means of proving this is to create a cryptographic hash or fingerprint of the disk contents and any derivative images made from it.
○ When comparing hash values, you need to use the same algorithm that was used to create the reference value.
Two hashing algorithms that we'll go over are:
○ Secure Hash Algorithm (SHA)
○ Message Digest Algorithm (MDA/MD5)

Message Digest Algorithm (MDA/MD5)
Designed in 1990. Updated to MD5, which was released in 1991 with a 128-bit hash value. No longer secure. Despite the lack of security, many forensics tools default to using MD5, as it is a bit faster than SHA, it offers better compatibility between tools, and the chances of an adversary exploiting a collision in that context are more remote.

Secure Hash Algorithm (SHA)
SHA is one of the Federal Information Processing Standards (FIPS) developed by NIST for the US government. SHA was created to address weaknesses in MDA. There are two versions of the standard in common use:
○ SHA-1 was released in 1995 to address the weaknesses of the original SHA algorithm. It uses a 160-bit digest. It also had its own weaknesses and is no longer used.
○ SHA-2 addresses the weaknesses of SHA-1 and uses longer digests (256 and 512 bits).
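To make the hashing step concrete, here is a minimal sketch using Python's standard hashlib to fingerprint an image in chunks and compare a working copy against the reference value; the file names are placeholders:

```python
import hashlib

# Chunked hashing of a disk image, so multi-gigabyte files are never
# loaded into memory all at once.
def hash_image(path, algorithm="sha256", chunk_size=1024 * 1024):
    h = hashlib.new(algorithm)          # e.g., "md5", "sha1", "sha256"
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Verify a working copy against the hash recorded at acquisition time.
# Note: compare using the SAME algorithm as the reference value.
reference = hash_image("evidence_original.img", "sha256")
working = hash_image("evidence_copy.img", "sha256")
print("match" if reference == working else "TAMPERING OR COPY ERROR")
```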
Carving
When you delete a file, you're not really deleting the file; you are simply deleting the metadata that tells the OS where that file is located. That tells the OS that the blocks the file occupied are open and free to be written over (those clusters become unallocated space; leftover fragments of old data can also persist in the slack space at the unused end of allocated clusters).

File carving is the process of extracting data from an image when that data has no associated file system metadata.
○ A file-carving tool analyzes the disk at sector/page level and attempts to piece together data fragments from unallocated and slack space to reconstruct deleted files, or at least bits of information from deleted files.
Some file carving tools include EnCase, FTK, and Autopsy.
○ Scalpel is an open source command line tool that can be used to conduct file carving.
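To illustrate the signature-matching idea behind carving (not how EnCase, FTK, or Scalpel actually work internally), here is a toy carver that hunts for JPEG header/footer markers in a raw image; it ignores fragmentation and will produce false positives, and the image file name is an example:

```python
# Signature-based carving sketch: dump whatever lies between JPEG
# start-of-image and end-of-image markers found in a raw disk image.
JPEG_SOI = b"\xff\xd8\xff"   # start-of-image marker
JPEG_EOI = b"\xff\xd9"       # end-of-image marker

def carve_jpegs(image_path, out_prefix="carved"):
    with open(image_path, "rb") as f:
        data = f.read()
    count, pos = 0, 0
    while (start := data.find(JPEG_SOI, pos)) != -1:
        end = data.find(JPEG_EOI, start)
        if end == -1:
            break
        with open(f"{out_prefix}_{count}.jpg", "wb") as out:
            out.write(data[start:end + 2])
        count, pos = count + 1, end + 2
    return count

print(carve_jpegs("usb_flash.img"), "candidate files recovered")
```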
Chain of Custody
The chain of custody is a record of evidence handling from collection through presentation in court. Evidence can be hardware components, electronic data, or telephone systems. The chain of custody documentation reinforces the integrity and proper custody of evidence from collection, to analysis, to storage, and finally to presentation. When security breaches go to trial, the chain of custody protects an organization against accusations that evidence has either been tampered with or is different than it was when it was collected. Every person in the chain who handles evidence must log the methods and tools they used.

Physical devices taken from the crime scene should be identified, bagged, sealed, and labeled. Tamper-proof (AKA "tamper-evident") bags cannot be opened and then resealed covertly. It is also appropriate to ensure that the bags have anti-static shielding to reduce the possibility that data will be damaged or corrupted on the electronic media by electrostatic discharge (ESD).

Criminal cases or internal security audits can take months or years to resolve. You must be able to preserve all the gathered evidence in a proper manner for a lengthy period. Computer hardware is prone to wear and tear, and important storage media like hard disks can fail when used normally, or even when not used at all. A failure of this kind may mean the corruption or loss of your evidence, both of which may have severe repercussions for your investigation.

Faraday Bags

CySA+ Lesson 5B: Analyze Network-related IOCs
This section relates also to objective 4.3:
○ Given an incident, analyze potential indicators of compromise.
Analysts must be able to identify and explain the IOCs found in network environments, as well as possible mitigation strategies for those IOCs.

Traffic Spikes and DDoS IoCs
DDoS attacks are often devastating to availability because even the largest and most well-defended networks can be overwhelmed by the sheer volume and distribution of malicious traffic. An unexpected surge in traffic from internet hosts could be a sign of an ongoing DDoS attack. A traffic spike is an increase in the number of connections that a server, router, or load balancer is processing. Traffic statistics can reveal indicators of DDoS-type attacks, but you must establish baseline values for comparison. Traffic spikes are diagnosed by comparing the number of connection requests to a baseline.

Bandwidth Consumption and DRDoS
Bandwidth consumption can be measured either as the value of bytes sent or received or as a percentage of link utilization. In a Distributed Reflection DoS (DRDoS), or amplification attack, the adversary spoofs the victim's IP address and tries to open connections with multiple servers. Those servers direct their SYN/ACK responses to the victim server. This rapidly consumes the victim's available bandwidth.

DDoS Mitigation
While load balancers and IP filtering can offer protection from DDoS attacks to an extent, these techniques can easily be overwhelmed by the large and distributed nature of the botnets usually used in modern DDoS attacks. The key to repelling a sustained attack lies in real-time analysis of log files to identify the pattern of suspicious traffic and redirecting the malicious traffic to a blackhole or sinkhole.

Beaconing Intrusion IOCs
Command and control (C&C/C2) refers to an infrastructure of hosts with which attackers direct, distribute, and control malware.
○ This is made possible through the utilization of a botnet.
After compromising systems and turning them into zombies, the attacker adds these systems to a pool of resources. The attacker then issues commands to the resources in this pool. A command can be something as simple as a ping (or "heartbeat") to verify that the bot is still alive in the botnet (known as beaconing). Another command can be to infect hosts that the bot is connected to in a network.

A bot may beacon its C2 server by sending simple transmissions at regular intervals to unrecognized or malicious domains. Irregular peer-to-peer (P2P) traffic in the network could indicate that a bot is communicating with a centralized C2 server. Hosts in the C2 network are difficult to pin down because they frequently change DNS names and IP addresses. Beaconing activity is detected by capturing metadata about all the sessions established or attempted and analyzing it for patterns that constitute suspicious activity. One problem, though, is that many legitimate applications also perform beaconing (NTP servers, autoupdate systems, cluster services, etc.), so indicators would have to include known bad IP addresses, the rate and timing of attempts, and the size of response packets. (When in doubt, just google it.)
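The rate-and-timing heuristic described above can be sketched in a few lines. A minimal example, assuming connection timestamps have already been extracted from flow logs (the IPs and times here are invented):

```python
import statistics

# For each (source, destination) pair, compute inter-arrival times and
# flag pairs whose intervals are nearly constant (low jitter), which is
# characteristic of automated beaconing. Thresholds are illustrative.
def beacon_candidates(flows, max_jitter=2.0, min_connections=5):
    suspects = []
    for (src, dst), times in flows.items():
        if len(times) < min_connections:
            continue
        gaps = [b - a for a, b in zip(times, times[1:])]
        if statistics.pstdev(gaps) <= max_jitter:  # near-constant interval
            suspects.append((src, dst, statistics.mean(gaps)))
    return suspects

flows = {  # timestamps in seconds, invented for illustration
    ("10.0.0.8", "203.0.113.9"): [0, 60, 120, 181, 240, 300],    # ~60s beacon
    ("10.0.0.12", "198.51.100.7"): [3, 95, 101, 340, 360, 720],  # human-like
}
for src, dst, period in beacon_candidates(flows):
    print(f"{src} -> {dst}: beacon-like traffic every ~{period:.0f}s")
```

As the notes point out, legitimate services beacon too, so in practice this jitter test would be combined with reputation data and response-size indicators.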
Command and Control Methods
Many different platforms and methods can be utilized in the C&C process:
○ Internet Relay Chat (IRC)
○ HTTP and HTTPS
○ DNS
○ Social media websites
○ Cloud services
○ Media and document files

Internet Relay Chat (IRC)
Internet Relay Chat is a group communication protocol. IRC networks are divided into discrete channels, which are the individual forums used by clients to chat. With IRC, it is easy for an attacker to set up an IRC server and begin sending interactive control directives to individual bots connected to the IRC server. IRC traffic is relatively easy for admins to detect, and many organizations have no use for this protocol and thus block all communications from it. IRC in general is on the decline, so use of IRC as a C2 channel is no longer widespread.

HTTP and HTTPS
Unlike IRC, communication over HTTP and HTTPS is still widely used in almost every organizational network. Blocking these protocols entirely may not be a viable option and could cause major disruption to organizational productivity and processes. HTTP and HTTPS are also more viable as a C2 channel because it can be difficult to separate malicious traffic from legitimate traffic. When used in C&C, the attacker embeds encoded commands within HTML served from multiple web servers as a way to communicate with its bots. (These commands and responses can be encrypted.) One way to mitigate this is to use an intercepting proxy at the network edge.
○ The proxy can decrypt and inspect all traffic, and only re-encrypt and forward legitimate requests and responses.

DNS
Because DNS traffic is not inspected or filtered in most private networks, attackers see an opportunity for their control messages to evade detection. DNS as a C&C channel is effective because the bot doesn't even need a direct connection outside the network. All it needs to do is connect to a local DNS resolver that executes lookups on authoritative servers outside the organization, and it can receive a response with a control message.

Social Media Websites
Facebook, Twitter, and LinkedIn can be (and have been) vectors for C&C operations. Social media platforms like these are a way to "live off the land," issuing commands through the platform's messaging functionality or through account profiles.
○ For example, many businesses implicitly trust LinkedIn.
○ An attacker could set up an account and issue commands to bots through the account's profile, using fields like employment status, employment history, status updates, and more.
There is evidence that a C&C operation used Twitter accounts to post seemingly random hashtags. These hashtags were encoded command strings, and bots would scour Twitter messages for these hashtags to receive their orders.
○ intego.com/mac-security-blog/flashback-mac-malware-uses-twitter-as-command-and-control-center

Cloud Services
Cloud companies that supply a wide variety of services, especially infrastructure and platform services, are also at risk of being a C&C vector. Attackers have used Google's App Engine platform to send C&C messages to bots through a custom application hosted by the service. App Engine is attractive to attackers because it offers free, limited access to the service.
○ Instead of incurring the cost of setting up their own servers, attackers use a cloud company's reliable and scalable infrastructure to support their C&C operations.

Media and Document Files
Media file formats like JPEG, MP3, and MPEG use metadata to describe images, audio, and video. An attacker could embed control messages inside this metadata, then send the media file to its bots over any number of communication channels that support media sharing. Because monitoring systems do not typically look at media metadata, the attacker may be able to evade detection.
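As an illustration of how the DNS channel described above can surface in query logs, here is a toy detector that flags long, high-entropy hostname labels; the thresholds and sample domains are invented, and real traffic would need allow-listing, since CDNs and cloud services legitimately use random-looking names:

```python
import math
from collections import Counter

# Encoded C2/tunneling payloads tend to produce unusually long subdomain
# labels with high Shannon entropy compared to human-chosen names.
def shannon_entropy(s):
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def suspicious(qname, min_len=30, min_entropy=4.0):
    label = qname.split(".")[0]  # leftmost label usually carries the payload
    return len(label) >= min_len and shannon_entropy(label) >= min_entropy

queries = [
    "www.example.com",
    "a9f3kq0zx7jw2m8d1vty5rbn4hcl6pgu.badcdn.net",  # hypothetical beacon
]
for q in queries:
    print(q, "->", "SUSPICIOUS" if suspicious(q) else "ok")
```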
Rogue Devices on a Network
A rogue device is any unauthorized piece of electronic equipment that is attached to a network or to assets in an organization. A USB thumb drive may be attached to a web server to siphon sensitive data. An extra NIC or Wi-Fi adapter may be installed on an employee's workstation to create a side channel for an attack. An employee's personal smartphone may be connected to the network, exposing the network to malware. The risk from rogue devices is a major reason why you should have an inventory of all devices in your organization.

Rogue system detection refers to a process of identifying (and removing) machines on the network that are not supposed to be there. You should be aware that "system" could mean several distinct types of device (and software):
○ Network taps - A physical device might be attached to cabling to record packets passing over that segment. Once attached, taps cannot usually be detected from other devices inline with the network, so physical inspection of the cabled infrastructure is necessary.
○ WAPs - While there are dedicated rogue WAPs such as the Wi-Fi Pineapple (distributed by Hak5), anyone with access to your network can create a WAP, even from a phone or laptop. In some cases this can be used to mislead others into connecting to the rogue access point, which then opens the door for a MITM attack.
○ Servers - An adversary may also try to set up a server as a malicious honeypot to harvest network credentials or other information.
○ Wired/wireless clients - End users can knowingly or unknowingly introduce malware to your network. They could also perform network reconnaissance or data exfiltration.
○ Software - Rogue servers and applications, such as malicious DHCP or DNS servers, may be installed covertly on authorized hardware.
○ Smart appliances - Devices such as printers, webcams, and VoIP handsets have all suffered from exploitable vulnerabilities in their firmware.
If use of these assets is not tracked and monitored, they could represent potential vectors for an adversary.

Rogue Machine Detection
There are several techniques available to perform rogue machine detection:
○ Visual inspection of ports/switches
○ Network mapping/host discovery - Enumeration scanners should identify hosts via banner grabbing/fingerprinting. Finding rogue hosts on a large network from a scan may still be difficult.
○ Wireless monitoring - Discover unknown or unidentifiable service set identifiers (SSIDs) showing up within range of the office.
○ Packet sniffing - Reveal the use of unauthorized protocols on the network and unusual peer-to-peer communication flows.
○ NAC and IDS - Security suites and appliances can combine automated network scanning with defense and remediation suites to try to prevent rogue devices from accessing the network.

Data Exfiltration IoCs
Usually access is not the end goal of the attacker; they most likely also want to steal sensitive data from your organization. The malicious transfer of data from one system to another is called data exfiltration. Data exfiltration can be performed over many different types of network channels. You will be looking for unusual endpoints or unusual traffic patterns in any of the following:
○ HTTP/HTTPS
○ DNS
○ FTP, IM, P2P, email
○ Explicit tunnels such as SSH or VPNs

HTTP Data Exfiltration
Watch for HTTP (or HTTPS) transfers to consumer file sharing sites or unknown domains. One typical approach is for an adversary to compromise consumer file sharing accounts (OneDrive, Dropbox, or Google Drive, for instance) and use them to receive the exfiltrated data. The more the organization allows file sharing with external cloud services, the more channels it opens that an attacker can use to exfiltrate critical information. If the data loss systems detect a sensitive file outbound for Dropbox, for example, they may allow it to pass; those systems won't necessarily be able to discern legitimate from illegitimate use of a single file. So, an attacker doesn't even need to have access to the employees' official Dropbox share—the attacker can open their own share, drop the files in, and then the data is leaked.

Also watch for HTTP requests to database-backed services. An adversary may use SQL injection or similar techniques to copy records from the database that they should not normally have access to. Injection attempts can be detected by web application firewalls (WAFs). Other indicators of injection-style attacks are spikes in requests to PHP files or other scripts, and unusually large HTTP response packets.
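A crude volume-based heuristic for the unusual-traffic-pattern indicators above might look like the following sketch; the baselines and flow records are hypothetical, and production detection would draw on NetFlow/IPFIX or proxy logs with per-host, per-hour baselines:

```python
from collections import defaultdict

# Flag hosts whose outbound byte totals far exceed their historical
# baseline—a simple data-exfiltration indicator.
baseline_bytes_per_day = {"10.0.0.8": 50_000_000, "10.0.0.12": 80_000_000}

def exfil_suspects(flow_records, multiplier=10):
    totals = defaultdict(int)
    for src, dst_domain, nbytes in flow_records:
        totals[src] += nbytes
    return [
        (src, total) for src, total in totals.items()
        if total > multiplier * baseline_bytes_per_day.get(src, 10_000_000)
    ]

records = [  # (source host, destination, bytes out) — invented values
    ("10.0.0.8", "dropbox.com", 700_000_000),        # 14x its usual volume
    ("10.0.0.12", "windowsupdate.com", 60_000_000),  # within baseline
]
print(exfil_suspects(records))
```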
CySA+ Lesson 6A: Explain Incident Response Processes
Objectives:
○ Explain the importance of the incident response process.
○ Given a scenario, apply the appropriate incident response procedure.

Incident Response Phases
Incident response procedures are the guideline-based actions taken to deal with security-related events. The incident would be the breach or suspected breach that will be responded to by either a CSIRT (Computer Security Incident Response Team) or a CERT (Computer Emergency Response Team). The NIST Computer Security Incident Handling Guide identifies the following stages in an incident response life cycle:
○ Preparation
○ Detection and Analysis
○ Containment
○ Eradication and Recovery
○ Post-Incident Activity

Incident Response: Preparation
During this phase you will want to make your system resilient against attacks before the attacks can even happen. Aside from the technical defences this involves, this phase includes creating policies and procedures for employees to adhere to, as well as setting up confidential lines of communication that are separate from your internal network's communication systems.

Incident Response: Detection and Analysis
You want to be able to detect whether an incident has taken place and determine how severe the damage incurred by the intrusion is. The act of determining which parts of your network have been affected more severely than others, and which parts are more important than others, thus requiring a more urgent response, is called "triage."

Incident Response: Containment
This step involves limiting the scope and magnitude of the incident. You want to limit the impact on employees, customers, and business partners. You will also want to ensure that you are securing data from further breaches.

Incident Response: Eradication and Recovery
Once the incident is contained (meaning the problem is no longer spreading or affecting the system), you can then move to eradicate the problem and return the system to normal. This phase may have to be returned to more than once, as you may detect more issues as you go through your system eradicating them.

Incident Response: Post-Incident Activity
You want to ensure that you document everything that the incident involves.
○ What led up to the incident?
○ What caused it to be detected?
○ What remediation methods were used?
○ What worked and what did not work?
○ What could have been done to avoid the incident from occurring?
This phase is often referred to as the "lessons learned" phase. After this phase you return to preparation once again, to ensure that you use your lessons learned to make your system more resilient to attacks in the future.

Data Criticality and Prioritization
It is important to know how to allocate resources effectively and efficiently. This involves assessing the severity and prioritization of affected systems. This triage process takes into account a multitude of factors, most importantly whether or not a breach revolves around data that deals with private or confidential information. Some of these categories of confidential information are:
○ PII
○ SPI
○ PHI
○ Financial information
○ Intellectual property
○ HVA
○ Corporate information
PII (Personally Identifiable Information)
Just like how it sounds, PII involves data that can be used to identify an individual. This includes data such as:
○ A subject's full name
○ A subject's birth date
○ A subject's birth place
○ A subject's address
○ A subject's Social Security number
○ Other unique identifiers

SPI (Sensitive Personal Information)
This is privacy-sensitive information about a subject, but it is NOT identifying information. This is anything about a subject that could be harmful to them if made public. Data that falls under the umbrella of SPI includes:
○ Religious beliefs
○ Political opinions
○ Trade union membership
○ Gender
○ Sexual orientation
○ Racial or ethnic origin
○ Health information

PHI (Personal Health Information)
Refers to medical records relating to a subject. Due to the nature of PHI, a data leak involving PHI can be especially devastating, since unlike a credit card number or bank account number it cannot be changed. A person's medical history is permanently a part of them.
○ This is why PHI is often a target of perpetrators of insurance fraud and blackmail.
In some cases reputational damage can be caused by a leak of PHI.

Financial Information
This refers to data held about bank and investment accounts, as well as tax returns.

IP (Intellectual Property)
This is information created by an individual or company, usually relating to the products or services that they provide. IP can include:
○ Copyrighted works
○ Patents
○ Trademarks
IP would likely be a target of a business competitor or counterfeiter.

Corporate Information
This data involves information that is essential to how a business functions. This information includes:
○ Product development, production, and maintenance
○ Customer contact and billing
○ Financial operations and controls
○ Legal obligations to keep accurate records for a certain time frame
○ Contractual obligations to third parties
If a company is planning to merge with another, this information, if leaked with the goal of stock market manipulation, could have a major impact on all of the parties involved.

HVA (High Value Assets)
A high value asset is a critical information system. This critical system will likely be relied upon for essential functions of the organization to continue. If any aspect of the CIA triad were to be compromised on this system, there would likely be major consequences. HVAs must be easily identifiable by a response team, and any incident involving an HVA must be considered high priority.

Reporting Requirements
This describes the requirement of notifying external parties when faced with certain types of incidents:
○ Data exfiltration: An attacker breaks in and transfers data out to another system. (Note that even suspected breaches are treated the same as confirmed breaches.)
○ Insider data exfiltration: Someone inside the company perpetrates the data exfiltration.
○ Device theft/loss: A device is lost or stolen.
○ Accidental data breach: Human error or misconfiguration leads to a data breach (such as sending confidential information to the wrong person).
○ Integrity/availability: Attacks that target a service that needs to be up at all times, or that corrupt essential data.

Response Coordination
Incident response may involve coordination between different departments depending on the type of situation that is being faced.
○ Senior leadership will be contacted so that they can consider how the incident affects their departments and internal functions. They can then make decisions accordingly.
○ Regulatory bodies will need to be contacted by companies that fall within the industry they oversee.
○ Law enforcement agencies may also need to be contacted if there are legal implications caused by the attack. A legal officer will be the person to communicate with law enforcement if it is required.
○ Human resources (HR) will be contacted if an insider threat is suspected of being the perpetrator of an attack, so that no breaches of employment law or employment contracts are made.
○ Public relations (PR) - Any attack that could cause the public to have a negative image of the company will require that PR get involved to handle damage control of the organization's public image.

CySA+ Lesson 6B: Apply Detection and Containment Processes

The OODA Loop
A model developed by the US military strategist Colonel John Boyd. The order of operations of the OODA loop is:
○ Observe
○ Orient
○ Decide
○ Act
The goal of the OODA loop is to prevent a purely reactionary response to an attack and instead take the initiative with a measured, well-thought-out response.

OODA: Observe
You need information about the network and the specific incident, and a means of filtering and selecting the appropriate data. In this step you identify the problem or threat and gain an understanding of the internal and external environment to the best of your ability.

OODA: Orient
Determine what the situation involves. Have you been alerted to the attack as it was happening, or are you just noticing an attack that has been occurring for some time? What does the attacker want?
○ Information?
○ To bring your system down?
○ To corrupt your stored data?
What resources do the attackers have at their disposal?
○ Is it a script kiddie using a tool they bought off the internet, or an APT that has created an attack to target you specifically?

OODA: Decide
What options do you have for countermeasures? What are your goals? Can we prevent a data breach from happening, or should we focus on gathering forensic evidence to try to prosecute later? Having a decisive action in mind can prevent you from losing time on decision making in the future. It can also prevent you from getting caught up in the phase of analysing the attack ("analysis paralysis").

OODA: Act
This is where you take the actions to remediate the situation as quickly as possible. Similar to the incident response phases, from here you may need to start the loop again until the situation is completely resolved.

Lockheed Martin Intelligence-Driven Defense (IDD)
The Lockheed Martin white paper on intelligence-driven defense defines "defensive capabilities" as the ability to:
○ Detect—Identify the presence of an adversary and the resources at his or her disposal.
○ Destroy—Render an adversary's resources permanently useless or ineffective.
○ Degrade—Reduce an adversary's capabilities or functionality (sometimes this can only be done temporarily).
○ Disrupt—Interrupt an adversary's communications or frustrate or confuse their efforts.
○ Deny—Prevent an adversary from learning about your capabilities or accessing your information assets.
○ Deceive—Supply false information to distort the adversary's understanding and awareness.

Impact Analysis
Damage incurred in an incident can have wide-reaching consequences, including:
○ Damage to data integrity and information system resources
○ Unauthorized changes and configuration of data or information systems
○ Theft of data or resources
○ Disclosure of confidential or sensitive data
○ Interruption of services and system downtime
After the detection of an incident, triage is used to classify the security level and classification of the incident.

Incident Security Level Classification
Having classifications of security breaches can allow you to refine the quality of your response. Classification factors include:
○ Data integrity
○ System process criticality
○ Downtime
○ Economic
○ Data correlation
○ Reverse engineering
○ Recovery time
○ Detection time

Containment
It is important to get past the analysis phase into a phase where you move to actually contain the breach, especially since it's likely that the breach may have been going on for quite some time before it was even detected.
○ 20% of reported breaches took days to discover
○ 40% took months to discover
○ Only 10% of reported breaches were discovered within an hour
When it is noticed (or suspected) that a system is compromised, you must move quickly to identify the correct containment technique. Your course of action will depend on and require several factors:
○ Safety and security of personnel
○ Preventing the intrusion from continuing
○ Identifying if the intrusion is a primary or secondary one
○ Avoiding alerting the attacker to the fact that the intrusion was discovered
○ Preserving forensic evidence

Isolation-Based Containment
This method of containment involves removing an affected component from whatever larger environment it is a part of. This will disallow the affected system from being able to interface with the network, the other devices within it, and the internet as well.
○ This is achieved simply by pulling the network plug and ensuring that all wireless functionality is disabled.
○ You can also disable the switch port that the device is using, or block its traffic at a gateway (see the sketch below).

Segmentation-Based Containment
Segmentation uses VLANs, routing/subnets, and firewall ACLs to prevent a host or group of hosts from communicating outside of a protected segment. Segmentation can be used with honeypots/honeynets to analyse an attacker's traffic as it is redirected to said honeypot. This method can allow you to analyse the attack as it runs while it is not allowed to reach the intended target within your network. In most cases the attacker will assume that the attack is continuing successfully.
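As a sketch of the scripted isolation mentioned above, the following assumes a Linux gateway with iptables and root privileges; in most environments you would instead quarantine at the switch or NAC layer, but the principle, dropping all traffic to and from the suspect host, is the same:

```python
import ipaddress
import subprocess

# Isolation-based containment at a gateway: insert DROP rules for the
# compromised host in both directions. Destructive to connectivity by
# design; the IP below is a hypothetical compromised workstation.
def isolate_host(ip):
    ipaddress.ip_address(ip)  # validate before passing to a shell command
    for chain, flag in (("FORWARD", "-s"), ("FORWARD", "-d"), ("INPUT", "-s")):
        subprocess.run(
            ["iptables", "-I", chain, flag, ip, "-j", "DROP"],
            check=True,
        )
    print(f"{ip} isolated; traffic will be dropped at the gateway")

if __name__ == "__main__":
    isolate_host("10.0.0.8")
```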
Course of Action (COA) Matrix
The COA matrix is a table that shows the defensive capabilities available at each phase of the cyber kill chain.

CySA+ Lesson 6C: Apply Eradication, Recovery, and Post-Incident Processes
It is the duty of a CySA+ professional to suggest steps that can be taken during the latter part of the incident response cycle. A checklist can be created to help with analysing and developing insight into how to navigate your way through the eradication/recovery and post-incident response phases.

Eradication
After the previous steps have been completed (identify, analyze, contain), you can move on to the mitigation and eradication of the problem from your systems. To give an example, if there is a worm that has infected a system or network that you have now isolated, you can proceed to work on removing the worm from your system. If you do not isolate the system first, you may be trying to remove something that is in the process of continuing to spread.

Sanitization and Secure Disposal
If a contaminated system is deemed not worth the effort of fixing, you may decide to simply destroy the affected system or drive and replace it with a backup.
○ For example, you may opt to discard infected drives if you know that you have clean backups that can immediately be used to replace them.
Even in cases like this, you want to ensure that no confidential data can be gleaned from the discarded drives. Data/drive sanitization is the process of irreversibly removing or destroying data stored on a memory device (i.e., nonvolatile memory such as HDDs, SSDs, flash memory, CDs, and DVDs).

Cryptographic Erasure (CE)
Cryptographic erasure is a method used with SEDs (self-encrypting drives) when you want the data on the drive to no longer be accessible. Everything that is written to an SED is encrypted automatically. If you no longer want that drive to be readable, you delete the cryptographic key that allows access to the drive. While the data is still on the drive, it is encrypted and unrecoverable.

Zero-Fill
If a cryptographic erase is not an option, you can also opt to use another sanitization method called zero-fill. Zero-filling is a method of formatting a hard disk where the formatter wipes the disk contents by overwriting them with zeros. Each bit present on the disk is replaced with a value of zero. Once the data is overwritten with zeros, the process cannot be undone on the hard drive.
○ This can hinder forensic tools from reconstructing any meaningful data from the drive.
Zero-fill is more useful for HDDs than SSDs, because HDDs and SSDs report which locations are available for storing data in different ways; SSD wear-leveling means some physical blocks may never be overwritten. In the case of SSDs, some may opt to instead use a Secure Erase (SE) utility like "Disk Wipe" or Heidi Computers' "Eraser".
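A minimal zero-fill sketch follows: it overwrites a target in fixed-size chunks. Against a real disk you would point it at a block device (e.g., /dev/sdb) and run it as root; the demo file here is created only so the example runs, and as noted above, this approach is not reliable for SSDs:

```python
# Zero-fill sketch: overwrite a target with zeros, chunk by chunk.
CHUNK = 1024 * 1024  # 1 MiB of zeros per write

def zero_fill(path, total_bytes):
    written = 0
    with open(path, "r+b") as dev:
        while written < total_bytes:
            n = min(CHUNK, total_bytes - written)
            dev.write(b"\x00" * n)
            written += n
        dev.flush()
    return written

if __name__ == "__main__":
    with open("scratch.img", "wb") as f:   # create a demo target file
        f.write(b"sensitive" * 1000)
    print(zero_fill("scratch.img", 9000), "bytes zeroed")
```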
Reconstruction and Reimaging
After a sanitization process, if you want to reuse the disk, you can reimage the host disk using a known clean backup that was created prior to the incident. Another option is to reconstruct a system using a configuration template or scripted install from trusted media.

Reconstitution of Resources
Reconstitution is a method used when reimaging is not possible for your situation (perhaps due to a lack of access to backups). You are rebuilding your system and regaining or reconstructing what was lost. Reconstitution involves the following steps:
○ Analyze the system for signs of malware after the system has been contained.
○ Terminate suspicious processes and securely delete them from the file system.
○ Identify and disable autostart locations in the file system, Registry, and task scheduler.
○ Replace contaminated OS and application processes with clean versions from a trusted source.
○ Reboot the system and analyze it for signs of continued malware infection.
○ If there is continued infection and you can't identify its source in the file system, investigate whether the firmware or a USB device being used has been infected.
○ Once the system tests negative for malware, you can return it to its role in the network while continuing to monitor it closely to ensure that it does not display any additional suspicious activity.

Recovery
After the malware has been removed and the system has been brought back from its infection, you may also have to restore some lost capabilities, services, or resources. The following are some examples of incident recovery:
○ If data is maliciously deleted from a database, you can restore the data if you have been creating backups prior to the attack.
○ If a DDoS takes down your web servers, you may need to manually reboot your servers and perform a health check on them before bringing them back up for public usage.
○ If an employee inadvertently introduces malware into your system through their workstation, you can attempt remedial measures through use of antimalware software. If the problem persists, you may have to wipe the entire drive and reinstall the OS.
With recovery you want to ensure that the system is no longer vulnerable to the same attack that affected it before.

Patching
In the case that an attack was performed using a software or firmware exploit, the system will need to be patched. (This of course assumes that a patch for the vulnerability exists in the first place, which may not always be the case.) You may also want to take time to determine why the system was not patched beforehand.
○ In some cases systems may need to be configured to automatically install updates and patches. Misconfiguration or human error may lead to this setting not being toggled on, or being accidentally toggled off.
If no patch is available, you need to apply different mitigation methods, such as monitoring of the system or network segmentation.

Restoration of Permissions
After an incident you may need to check whether an attacker was able to elevate permissions for themselves, or remove the permissions of those who had higher privileges in the system. After the malware has been removed, the vulnerability has been patched, and the system restored, you may also need to review the permissions on the system and ensure that they are as they should be. If not, you will have to go through and reinstate the proper permissions and privileges.

Verification of Logging
If an attacker can affect and disable your logging systems, then they will do so. After an incident you want to ensure that your scanning, monitoring, and logging systems are functioning properly, and that they have not been disabled or somehow subverted by the attacker.

Vulnerability Mitigation and System Hardening
Reducing the attack surface is a MAJOR part of system hardening. Simply put, reducing the attack surface means removing unnecessary functions or services that are not in use by the organization so that they cannot be used as an attack vector by a malicious entity. Here are some examples:
○ Disable unused user accounts.
○ Deactivate/remove unnecessary components, including hardware, software, network ports, applications, operating system processes, and services.
○ Implement patch management software that will allow you to test updates before deployment into your systems.
○ Restrict access to USB and Bluetooth on devices.
○ Restrict shell commands and command line usage and functionality based on the privileges that a user should have.

Post-Incident Activities
After a threat has been eradicated and the system restored, there are some post-incident activities that must be completed. Activities like report writing, evidence retention, and updates to response plans can help you to see what went wrong in this situation and what you can do in the future to prevent such an event from occurring again.

Report Writing
This involves explaining all of the relevant details of a threat event, or potential threat event, and the effects that it can have on the organization. It is important to note that the people reading these reports may not have the same technical knowledge as you do, so you must be able to explain technical subject matter in a non-technical way.
This way, any executive decisions that have to be made can be based on an understanding of what your report entailed. Reports should:
○ Show the date of composition as well as the names of the authors.
○ Be marked confidential or secret (based on the requirements of said information).
○ Start with an executive summary for the target audience, letting them know the TL;DR of what the report is going to be about and which sections of the report go into which topics. (Especially useful for longer reports.)
○ Make points directly, coherently, and concisely. (Don't beat around the bush.)

Evidence Retention
If an attack has any legal or regulatory impact, evidence of the incident must be preserved. Different regulatory bodies have different requirements for how long certain evidence must be retained. If an organization wishes to pursue civil or criminal prosecution of the incident's perpetrators, the evidence must be collected and stored using the proper forensics procedures.

Lessons Learned Report
This post-incident activity reviews the incident in an attempt to clearly identify how, if at all possible, the incident could have been avoided. Lessons learned reports can be structured to use the "six questions" method:
○ Who was the adversary?
○ Why was the incident perpetrated?
○ When did the incident occur/when was it detected?
○ Where did the incident occur/what was targeted?
○ How did the incident occur/what methods did the attacker use?
○ What security controls would have provided better mitigation or improved the response?

Change Control Process
Corrective actions and remediating controls need to be introduced in a planned way, following the organization's change control process. Change control validates the compatibility and functionality of the new system and ensures that the update or installation process has minimal impact on business functions.

CySA+ Lesson 7 Part A
Objective 5.2: Given a scenario, apply security concepts in support of organizational risk mitigation.

Apply Risk Identification, Calculation, and Prioritization Processes
As cybersecurity professionals, you have the responsibility to identify risks and protect your systems from them. "Risk" refers to the measure of your exposure to the chance of damage or loss. It signifies the likelihood of a hazard or dangerous threat occurring. Risk is often associated with the loss of a system, power, or network, as well as other physical losses. Risk can also affect people, practices, and processes.

Risk Identification Process
Enterprise Risk Management (ERM) is defined as the comprehensive process of evaluating, measuring, and mitigating the many risks that can affect an organization's productivity. Due to the modern era's heavy focus on information and interconnected systems across the world, ERM is a responsibility often taken on by the IT department. There are quite a few reasons for the adoption of ERM; some examples include:
○ Keeping confidential customer information out of the hands of unauthorized parties
○ Keeping trade secrets out of the public sphere
○ Avoiding financial loss due to damaged resources
○ Avoiding legal trouble
○ Maintaining a positive public perception of the enterprise's brand image
○ Establishing trust and liability in a business relationship
○ Meeting stakeholders' objectives
As a cybersecurity analyst you may not be able to make all of the decisions regarding risk management. Such decisions may be made by business stakeholders or project management teams.
However, you may be in the position to understand the technical risks that exist and may need to bring them to the attention of decision makers.

Risk Management Framework
NIST's Managing Information Security Risk special publication is one starting point for applying risk identification and assessment. NIST identifies four structural components or functions:
○ Frame
○ Assess
○ Respond
○ Monitor

Frame
Establish a strategic risk management framework, supported by decision makers at the top tier of the organization. The risk frame sets an overall goal for the degree of risk that can be tolerated and demarcates responsibilities. The risk frame directs and receives inputs from all other processes and should be updated as changes to the business and security landscape arise.

Assess
Identify and prioritize business processes/workflows. Perform a systems assessment to determine which IT assets and procedures support these workflows. Identify risks to which information systems, and therefore the business processes they support, are exposed.

Respond
Mitigate each risk factor through the deployment of managerial, operational, and technical security controls.

Monitor
Evaluate the effectiveness of risk response measures and identify changes that could affect risk management processes.

System Assessment
Businesses don't make money off of just being secure. Their security is merely a means to protect the assets of the company that allow it to run its business operations smoothly. Assets must be valued according to the liability that the loss or damage of the asset would create:
○ Business continuity - Refers to the losses from no longer being able to fulfill contracts and orders due to the breakdown of critical systems.
○ Legal - These are responsibilities in civil and criminal law. Security incidents could make an organization liable to prosecution (criminal law) or for damages (civil law).
○ Reputational - These are losses due to negative publicity and the consequent loss of market position and customer trust.

System Assessment Process
In order to design a risk assessment process, you must know what you want to make secure, through a process of system assessment. This means compiling an inventory of the organization's business processes and, for each process, the tangible and intangible assets and resources that support them. These could include the following:
○ People - Employees, visitors, and suppliers.
○ Tangible assets - Buildings, furniture, equipment, machinery, electronic data files, and paper documentation.
○ Intangible assets - Ideas, commercial reputation, brand, and so on.
○ Procedures - Supply chains, critical procedures, standard operating procedures, and workflows.

Mission Essential Functions
Mission Essential Functions (MEFs) are the sets of an organization's functions that must be continued throughout, or resumed rapidly after, a disruption of normal operations. The organization must be able to perform these functions as closely to normal as possible. If there is a service disruption, the mission essential functions must be restored first.

Asset/Inventory Tracking
There are many software suites and associated hardware solutions available for tracking and managing assets (or inventory). An asset management database can be configured to store as much or as little information as is deemed necessary, though typical data would be type, model, serial number, asset ID, location, user(s), value, and service information.
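A minimal shape for one such asset record, using the typical fields listed above (the field names and sample values are illustrative; a real inventory would normally live in a CMDB or asset management suite):

```python
from dataclasses import dataclass, field

# One asset management record, modeled on the fields the notes describe.
@dataclass
class Asset:
    asset_id: str
    asset_type: str            # e.g., "laptop", "switch", "server"
    model: str
    serial_number: str
    location: str
    users: list[str] = field(default_factory=list)
    value_usd: float = 0.0
    service_info: str = ""     # warranty, maintenance schedule, etc.

inventory = [
    Asset("A-0001", "server", "PowerEdge R740", "SN123456",
          "HQ rack 4", users=["it-ops"], value_usd=9500.0,
          service_info="support contract to 2026"),
]
print(inventory[0].asset_id, inventory[0].location)
```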
Threat and Vulnerability Assessment
Each asset and process must be assessed for vulnerabilities and their consequent exposure to threats. This process of assessment will be ongoing, as systems are updated and as new threats emerge. The range of information assets will highlight the requirements needed to secure those assets. For example, you might operate a mix of host OSs (Windows, Linux, and macOS, for instance) or use a single platform. You might develop software hosted on your own servers or use a cloud platform. The more complex your assets are, the more vulnerability management processes you will need to have in place.

Risk Calculation
When determining how to protect computer networks, computer installations, and information, risk calculation or analysis is a process within the overall risk management framework used to assess the probability (or likelihood) and magnitude (or impact) of each threat scenario. Calculating risk is complex in practice, but the method can be simply stated as finding the product of these two factors:
○ Probability - The chance of the threat being realized
○ Magnitude - The overall impact that a successful attack or exploit would have
Probability x Magnitude = Risk

Quantitative Risk Calculation
A quantitative assessment of risk attempts to assign concrete values to the elements of risk. Probability is expressed as a percentage and magnitude as a cost (monetary) value, as shown in the following formula:

SLE = Asset Value (AV) x Exposure Factor (EF)

EF is the percentage of the asset's value that would be lost. The single loss expectancy (SLE) value is the financial loss that is expected from a specific adverse event. For example, if the asset value is $50,000 and the EF is 5%, the SLE calculation is as follows:

SLE = $50,000 x 0.05 = $2,500

If you know or can estimate how many times this loss is likely to occur in a year (the annualized rate of occurrence, or ARO), you can calculate the risk cost on an annual basis as the annualized loss expectancy (ALE):

ALE = SLE x ARO

The problem with quantitative risk assessment is that the process of determining and assigning these values can be complex and time-consuming. The accuracy of the values assigned is difficult to determine without historical data (often, it must be based on subjective guesswork). However, over time and with experience, this approach can yield a detailed and sophisticated description of assets and risks and provide a sound basis for justifying and prioritizing security expenditure.

Qualitative Risk Calculation
Qualitative analysis is generally scenario-based. The qualitative approach seeks out people's opinions of which risk factors are significant. Qualitative analysis uses simple labels and categories to measure the likelihood and impact of risk.
○ For example, impact ratings can be severe/high, moderate/medium, or low; and likelihood ratings can be likely, unlikely, or rare.
Another simple approach is the "traffic light" impact grid. For each risk, a simple red, amber, or green indicator can be put into each column to represent the severity of the risk, its likelihood, the cost of controls, and so on. This simple approach can give a quick impression of where efforts should be concentrated to improve security.
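The quantitative example above, plus an annualized figure, works out as follows in a few lines (the ARO value is invented to complete the calculation):

```python
# Worked quantitative risk calculation using the figures from the notes.
asset_value = 50_000        # AV: value of the asset ($)
exposure_factor = 0.05      # EF: fraction of value lost per incident
aro = 2                     # hypothetical: incident expected twice a year

sle = asset_value * exposure_factor   # single loss expectancy
ale = sle * aro                       # annualized loss expectancy

print(f"SLE = ${sle:,.0f}")   # SLE = $2,500
print(f"ALE = ${ale:,.0f}")   # ALE = $5,000
```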
Business Impact Analysis
Business impact analysis (BIA) is the process of assessing what losses might occur for each threat scenario.
○ For instance, if a roadway bridge crossing a local river is washed out by a flood and employees are unable to reach a business facility for five days, the estimated costs to the organization need to be assessed for lost manpower and production.
Impacts can be categorized in several ways, such as impacts on life and safety, impacts on finance and reputation, and impacts on privacy. BIA is governed by metrics that express system availability. These include:
○ Maximum Tolerable Downtime (MTD)
○ Recovery Time Objective (RTO)
○ Work Recovery Time (WRT)
○ Recovery Point Objective (RPO)

Maximum Tolerable Downtime (MTD)
MTD is the longest period that a business function outage may last without causing irrecoverable business failure. Each business process can have its own MTD, such as a range of minutes to hours for critical functions, 24 hours for urgent functions, 7 days for normal functions, and so on. MTDs vary by company and event. Each function may be supported by multiple system assets. The MTD sets the upper limit on the amount of recovery time that system and asset owners have to resume operations.
○ For example, an organization specializing in medical equipment may be able to exist without incoming manufacturing supplies for three months because it has stockpiled inventory. After three months the organization will not have enough supplies and may not be able to manufacture additional products, leading to failure. In this case, the MTD for the supply chain is three months.

Recovery Time Objective (RTO)
RTO is the period following a disaster that an individual IT system may remain offline. This is the amount of time it takes to identify that there is a problem and then perform recovery (for example, restore from backup or switch in an alternative system).

Work Recovery Time (WRT)
WRT represents the time following systems recovery when there may be added work to reintegrate different systems, test overall functionality, and brief system users on any changes or different working practices so that the business function is again fully supported.

Recovery Point Objective (RPO)
RPO is the amount of data loss that a system can sustain, measured in time. In other words, if a database is destroyed by malware, an RPO of 24 hours means that the data can be recovered (from a backup copy) to a point not more than 24 hours before the database was infected. For example, a customer-leads database for a sales team might be able to sustain the loss of a few hours' or days' worth of data (the salespeople will generally be able to remember who they have contacted and re-key the data manually). Conversely, order processing may be considered more critical, as any loss will represent lost orders and it may be impossible to recapture web orders or other processes initiated only through the computer system, such as linked records to accounting and fulfillment.

BIA Endnote
Maximum Tolerable Downtime (MTD) and Recovery Point Objective (RPO) help to determine which business functions are critical and to specify appropriate risk countermeasures. For example, if your RPO is measured in days, then a simple tape backup system should suffice; if RPO is zero or measured in minutes or seconds, a more expensive server cluster backup and redundancy solution will be needed.
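These metrics relate to one another: the recovery work has to fit inside the tolerable outage, i.e., RTO + WRT must not exceed MTD. A quick consistency check over hypothetical figures:

```python
# BIA sanity check: RTO + WRT <= MTD for each business function.
# All numbers are hypothetical and expressed in hours.
functions = {
    "order processing": {"mtd": 24, "rto": 8, "wrt": 4},
    "payroll":          {"mtd": 168, "rto": 96, "wrt": 96},
}

for name, m in functions.items():
    needed = m["rto"] + m["wrt"]
    status = "OK" if needed <= m["mtd"] else "PLAN FAILS"
    print(f"{name}: RTO+WRT = {needed}h vs MTD {m['mtd']}h -> {status}")
```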
○ For example, if fire is a threat, a policy controlling the use of flammable materials on-site reduces likelihood, while a system of alarms and sprinklers reduces impact by (hopefully) containing any incident to a small area.
○ Another example is off-site data backup, which provides a remediation option in the event of servers being destroyed by fire.
Risk Avoidance
Additional risk response strategies are as follows:
○ Risk Avoidance - This means that you stop doing the activity that is risk-bearing.
For example, a company may develop an in-house application for managing inventory and then try to sell it. If, while selling it, the application is discovered to have numerous security vulnerabilities that generate complaints and threats of legal action, the company may decide that the cost of maintaining the security of the software is not worth the revenue and withdraw it from sale.
Another example involves the use of an application that creates an automated and convenient resource for the organization and makes the employees' jobs exponentially easier, but also introduces a major security vulnerability. Once that vulnerability is discovered, the organization may opt to stop using that software.
Risk Transference
Aside from risk avoidance, there is also risk transference.
Risk transference involves assigning risk to a third party, such as an insurance company or a contract with a supplier that defines liabilities.
For example, a company could stop in-house maintenance of an e-commerce site and contract the services to a third party that would be liable for any fraud or data theft.
Risk Acceptance
Risk acceptance (or retention) means that no countermeasures are put in place, either because the level of risk does not justify the cost or because there will be unavoidable delay before the countermeasures are deployed. In this case, you should continue to monitor the risk (as opposed to ignoring it).
Communication of Risk Factors
To ensure that the business stakeholders understand each risk scenario, you should articulate it such that the cause and effect can clearly be understood by the business owner of the asset.
○ A DoS risk should be put into plain language that describes how the risk would occur and, as a result, what access is being denied to whom, and the effect to the business. For example: "As a result of malicious or hacking activity against the public website, the site may become overloaded, preventing clients from accessing their client order accounts. This will result in a loss of sales for so many hours and a potential loss of revenue of so many dollars."
Risk Register
A risk register is a document showing the results of risk assessments in a comprehensible format. The register may resemble the traffic light grid, with columns for impact and likelihood ratings, date of identification, description, countermeasures, owner/route for escalation, and status.
A risk register should be shared between stakeholders (executives, department managers, and senior technicians) so that they understand the risks associated with the workflows that they manage.
Training and Exercises
Part of the risk management framework is ongoing monitoring to detect new sources of risk or changed risk probabilities or impacts. Security controls will, of course, be tested by actual events, but it is best to be proactive and initiate training and exercises that test system security.
Training/exercises will include:
○ Tabletop Exercises
○ Penetration Testing
○ Red and Blue Team Exercises
Tabletop Exercises
A tabletop exercise is a facilitator-led training event where staff practice responses to a particular risk scenario. The facilitator's role is to describe the scenario as it starts and unfolds, and to prompt participants for their responses.
As well as a facilitator and the participants, there may be other observers and evaluators who witness and record the exercise and assist with follow-up analysis.
A single event may use multiple scenarios, but it is important to use practical and realistic examples, and to focus on each scenario in turn.
These exercises are simple to set up but do not provide any practical evidence of things that could go wrong, time to complete, and so on.
Penetration Testing
You might start reviewing technical and operational controls by comparing them to a framework or best practice guide, but this sort of paper exercise will only give you a limited idea of how exposed you are to "real world" threats. A key part of cybersecurity analysis is to actively test the selection and configuration of security controls by trying to break through them in a penetration test (or pen test).
When performing penetration testing, there needs to be a well-defined scope setting out the system under assessment and limits to testing, plus a clear methodology and rules of engagement.
There are many models and best practice guides for conducting penetration tests. One is NIST's SP 800-115 (csrc.nist.gov/publications/detail/sp/800-115/final). (Specific models for penetration testing will NOT be on the CySA+ exam, but knowing what a penetration test is is crucial.)
Red and Blue (and White) Team Exercises
Penetration testing can form the basis of functional exercises. One of the best-established means of testing a security system for weaknesses is to play "war game" exercises in which the security personnel split into teams:
Red team - This team acts as the adversary, attempting to penetrate the network or exploit the network as a rogue internal attacker. The red team might be selected members of in-house security staff or might be a third-party company or consultant contracted to perform the role.
Blue team - This team operates the security system with a goal of detecting and repelling the red team.
White team - This team sets the parameters for the exercise and is authorized to call a halt if the exercise gets out of hand or should no longer be continued for business reasons. The white team will determine objectives and success/fail criteria for the red and blue teams. The white team also takes responsibility for reporting the outcomes of the exercises, diagnosing lessons learned, and making recommendations for improvements to security controls.
CySA+ Lesson 7 Part B: Objectives
7B covers the information required for CySA+ exam objectives:
○ Explain the importance of frameworks, policies, procedures and controls
Frameworks, Policies, and Procedures
Some organizations have developed IT service frameworks to provide best practice guides to implementing IT and cybersecurity. As compliance is a critical part of information security, it is important that you understand the role and application of these frameworks.
The use of a framework allows an organization to make an objective statement of its current cybersecurity capabilities, identify a target level of capability, and prioritize investments to achieve that target (see the sketch below for an illustration of this gap analysis).
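As an illustration only (not part of any formal framework), the following minimal Python sketch shows the capability gap idea: hypothetical tier scores for a current profile are compared against a target profile, and the categories with the largest gaps are surfaced first. The category names borrow the five functions discussed in the next section; the tier numbers are invented.

# Hypothetical capability tiers (0-4) for a current vs. target profile.
current = {"Identify": 2, "Protect": 3, "Detect": 1, "Respond": 1, "Recover": 2}
target  = {"Identify": 3, "Protect": 3, "Detect": 3, "Respond": 3, "Recover": 3}

# Compute the gap for each category and prioritize the biggest gaps first.
gaps = {cat: target[cat] - current[cat] for cat in current}
for cat, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    if gap > 0:
        print(f"{cat}: invest to close a gap of {gap} tier(s)")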
Enterprise Security Architecture (ESA)
An enterprise security architecture (ESA) framework is a list of activities and objectives undertaken to mitigate risks.
Frameworks can shape company policies and provide checklists of procedures, activities, and technologies that should ideally be in place. This is valuable for giving structure to internal risk management procedures and also provides an externally verifiable statement of regulatory compliance.
Prescriptive Frameworks
Many ESA frameworks adopt a prescriptive approach. These frameworks are usually driven by regulatory compliance factors. In a prescriptive framework, the controls used in the framework must be deployed by the organization. Each organization will be audited to ensure compliance.
Examples of ESA frameworks include:
○ Control Objectives for Information and Related Technologies (COBIT)
○ International Organization for Standardization (ISO) 27001
○ Payment Card Industry Data Security Standard (PCI DSS)
Inside of these frameworks we use a maturity model:
○ Maturity model - A component of an ESA framework that is used to assess the formality and optimization of security control selection and usage and to address any gaps.
○ Maturity models review an organization against expected goals and determine the level of risk the organization is exposed to as a result.
Risk-Based Frameworks
Because a prescriptive framework is checklist-driven, it can struggle to keep pace with a continually evolving threat landscape, so an organization may opt for a risk-based approach instead.
A risk-based framework is a framework that uses risk assessment to prioritize security control selection and investment.
NIST Cybersecurity Framework
A relatively new risk-based framework that is focused on IT security over IT service provision.
This framework covers three core areas:
○ Framework core - Identifies five cybersecurity functions (Identify, Protect, Detect, Respond, and Recover); each function can be divided into categories and subcategories.
○ Implementation tiers - Assess how closely core functions are integrated with the organization's overall risk management process; each tier is classed as Partial, Risk Informed, Repeatable, or Adaptive.
○ Framework profiles - Used to supply statements of current cybersecurity outcomes and target cybersecurity outcomes, to identify the investments that will be most productive in closing the gap in cybersecurity capabilities shown by comparison of the current and target profiles. (You want to capture a baseline and see where you are in terms of the framework. If you are at a low level of capability and you want to be at a high level, you can identify the things that need to be done to get there.)
Audits and Assessments
Auditing is an examination of the compliance of an organization to standards set by external or internal entities.
These examinations will include:
○ Quality Control and Quality Assurance
○ Verification and Validation
○ Audits
○ Scheduled reviews and continual improvement
Quality Control (QC)
The process of determining whether a system is free from defects and deficiencies.
You want to make sure that a car you buy meets QC standards, as any defects or hardware/technical deficiencies could result in catastrophic events. "Bruh, I can't turn!"
Quality Assurance (QA)
QA is defined as the processes that analyze what constitutes quality and how it can be measured and checked.
QA comprises the actions that happen at the time of putting something together.
Verification and Validation (V&V)
QC and QA take the form of verification and validation (V&V) within software development.
Verification - A compliance-testing process to ensure that the security system meets the requirements of a framework or regulatory environment, or that a product or system meets its design goals.
Validation - The process of determining whether the security system is fit for purpose (once the code has been installed, does it do what it's meant to do?).
Assessments and Evaluations
Assessment - The process of testing the subject against a checklist of requirements in a highly structured way for a measurement against an absolute standard.
○ Involves an "absolute standard" and thus, as mentioned above, will involve a checklist approach.
Evaluation - A less methodical process of testing that is aimed at examining outcomes or proving the usefulness of the subject being tested.
○ Evaluation is more likely to use comparative measurements and is more likely to depend on the judgement of the evaluator than on a checklist or framework.
Audit
A more rigid process than an assessment or evaluation, in which the auditor compares the organization against a predefined baseline to identify areas that require improvement/remediation.
○ Audits are generally required in regulated industries such as payment card and healthcare processing.
○ These are not based on evaluations or assessments; they are based on known standards. These standards are requirements that everyone has to meet.
○ This is not tailored to the organization as with assessments; your organization must meet the standard.
Scheduled Reviews and Continual Improvement
The ultimate step in incident response is a "lessons learned" review and report, in which any improvements to procedures or controls that could have helped to mitigate the incident are discussed.
A system of scheduled reviews extends this "lessons learned" approach to a regular calendar-based evaluation. Scheduled reviews are particularly important if the organization is lucky enough not to suffer from frequent incidents.
To prepare for a scheduled review you will complete a report detailing some of the following:
○ Major incidents experienced in the last period.
○ Trends and analysis of threat intelligence, both directly affecting your company and the general cybersecurity industry.
○ Changes and additions to security controls and systems.
○ Progress toward adopting or updating compliance with a framework or maturity model.
Continuous Monitoring
In contrast to this reactive approach, continuous security monitoring (CSM), also referred to as security continuous monitoring, is a process of continual risk reassessment. Rather than a process driven by incident response, CSM is an ongoing effort to obtain information vital in managing risk within the organization.
CSM ensures that all key assets and risk areas are under constant surveillance by finely tuned systems that can detect a wide variety of issues.
CSM architecture carefully tracks the many components that make up an organization: network traffic, internal and external communications, host maintenance, and business operations.
Detective Controls
Detective controls identify something that has already happened and try to determine the what, when, and how of the event. Examples of detective controls are IDS and audit logs.
CySA+ Lessons 8-13 (Quiz 3)
CySA+ Lesson 8 Part A: Analyze Output from Enumeration Tools
Exam objectives: Given a scenario, analyze the output from common vulnerability assessment tools.
○ Knowing how to use and how to interpret the output of vulnerability assessment tools will be a significant portion of the CySA+, whether posed in question or PBQ format.
○ Knowing what you're looking for, how to search for it, and how to analyze the data from the search will be crucial to identifying potential vulnerabilities in network environments.
Performing Vulnerability Management
Vulnerability scanning and management is a critical security task that enables you to identify where your organization may be at risk and how to fix any weaknesses that could lead to incidents.
Vulnerability scanning can be done with passive scanning tools that analyze network chatter (Kibana), or with active network enumeration tools that probe key aspects of a network communication infrastructure (nmap).
Enumeration Tools
The purpose of enumeration is to identify and scan network ranges and hosts belonging to the target and map out an attack surface.
Many enumeration tools and techniques involve at least some sort of active connection to the target. Active connections involve transmitting some form of data to the target from the attacking (or analyzing) device.
Active techniques are those that will be discovered if the victim is logging or monitoring the network and host connections.
Semi-passive techniques use probes that are difficult to distinguish from legitimate traffic, issuing them infrequently and from a range of source addresses, so that the scanning cannot be identified without the victim's network security software generating many false positives.
Passive scanning monitors a network (similar to the technique used by IDSs), but instead of watching for intrusion attempts it looks for signatures of outdated systems and applications. Passive scanning is mostly capable of detecting vulnerabilities that are reflected in network traffic.
As well as distinguishing between active and passive methods, you can also identify various classes of reconnaissance and enumeration tools:
○ Open-Source Intelligence (OSINT) - These tools query publicly available information, mostly using web and social media search tools. (This can be considered a fully passive approach.)
○ Footprinting - These tools map out the layout of the network, typically in terms of IP address usage, routing topology, and DNS namespace (subdomains and hostnames). Footprinting can be performed in active, non-stealthy modes to obtain quick results at the risk of detection, or by using slow semi-passive techniques.
○ Fingerprinting - These tools perform host system detection to map out open ports, OS type and version, file shares, current running services and applications, system uptime, and other useful metadata.
Fingerprinting can be performed by active, semi-passive, and passive tools.
Nmap
Nmap is the world's most popular open-source enumeration tool.
Nmap can enumerate and discover network topology, hosts, services, and OSs.
In its simplest form you could type nmap 192.168.0.0/24 (or whatever IP range you would like to scan).
○ This basic syntax will ping and send a TCP acknowledgement packet to ports 80 and 443 to determine if a host is at that IP address. If a host is detected there, nmap will perform a port scan to determine which services are running.
Nmap's default scanning behavior is not stealthy (it scans 1,000 ports for each host it finds), and most IDS/IPS and firewalls will see this behavior and attempt to block it. That is why we use host discovery scans.
A host discovery scan would be used with the following syntax:
○ nmap -sn 192.168.0.0/24
-sn is the switch that tells nmap that we would like to do a host discovery scan rather than a default scan. Host discovery scanning suppresses the 1,000-port scan that the default scan performs against hosts.
There are many different switches that can be used with nmap to change the behavior/output of the tool (yes, it's case sensitive). Examples of these switches are:
○ -sL - Lists the IP addresses from the supplied target range and performs a reverse-DNS query to discover any host names associated with those IPs.
○ -PS - (TCP SYN ping) Probes specific ports from the given list using a TCP SYN packet instead of ICMP (this helps in cases where ICMP packets are blocked in a network). In this case we send out a SYN and receive the SYN-ACK telling us the host is there, but we do not reply back.
○ --scan-delay - (Sparse scanning) Issues probes with significant delays to become stealthier and avoid detection by an IDS or IPS.
○ -Tn - Issues probes using a timing template, with n being the template to utilize (0 is the slowest and 5 is the fastest).
In practice the syntax of this would be: nmap -T2 192.168.0.1
○ -sI - This is a stealth (idle) scan that makes it appear that another machine started the scan, to hide the true identity of the scanning machine.
○ -f or --mtu - A technique that splits the TCP header of each probe between multiple IP datagrams to make it harder for an IDS or IPS to detect.
Nmap Output Options
Nmap can output its scan results in several formats:
○ Interactive - Human-readable output designed to be viewed on screen.
○ Normal (-oN) - Human-readable output directed to a file for analysis later.
○ XML (-oX) - Output using XML formatting to delimit the information.
○ Grepable (-oG) - Delimits the output using one line for each host, with tab, slash, and comma characters for fields. This format makes it easier to parse the output using the grep Linux regular-expressions command (or any other regex tool).
In practice the syntax of outputting to a file would look like:
○ nmap -oX C:\Users\John\Desktop\output.xml 192.168.0.11
Additionally, note that multiple switches can be combined:
○ nmap -sn -oX C:\Users\John\Desktop\output.xml 192.168.0.24
In the above example we are combining the host discovery scan (-sn) with XML output (-oX). Nmap syntax can get to be quite long depending on the specific task an individual is trying to accomplish.
Nmap Port Scans
After an active IP host is found on the network and the general network topology is collected, the attacker will next try to identify which hosts may be of interest.
In the reconnaissance phase they will aim to discover which OSs are in use for the networked devices and hosts and which services are running, sometimes even which applications are running those services (this process is called service discovery).
Upon completion of a host discovery scan, nmap will report on the state of each host for each IP address in the scope.
○ At this point, the attacker can run service discovery scans against one or more of the active IP addresses.
One simple example of a port scan would be: nmap -p 80 192.168.100.50
○ In the above example nmap will probe the device and display the state of HTTP on that device.
During the reconnaissance phase an attacker may run into the problem of being detected. Service discovery scans can take minutes or even hours to complete, and an IDS can easily be programmed with rules to detect nmap scanning activity and block it.
Port scanning switches:
○ -sS - (TCP SYN) Conducts a half-open scan by sending a SYN packet to identify the port state without sending an ACK packet afterwards. The target's response to the scan's SYN packet identifies the port state. (This is a similar method to that used to create a DoS event; however, in this case we are not sending nearly enough packets to create any significant load on the target host/server.) This scan requires you to have administrative access.
○ -sT - (TCP Connect) Conducts a full three-way handshake scan by sending a SYN packet to identify the port state and then sending an ACK packet once the SYN-ACK is received. (This will be used if a network card does not support a half-open scan.)
○ -sN - (Null scan) Conducts a scan by sending a packet with no TCP flags set in the header. There is no flag information in the header that is sent. (This, of course, is abnormal, and most IDS and IPS systems are going to see this and immediately assume that it is malicious.)
○ -sF - (FIN scan) Conducts a scan by sending an unexpected FIN packet. FIN packets are used as a way to end communication sessions, so sending one in the middle of communication is unexpected. (Like Null scans, this is abnormal and will send up flags to the IDS/IPS system.)
○ -sU - (UDP scan) Conducts a scan by sending a UDP packet to the target and waiting for a response or timeout. (Since UDP does not have SYN and ACK, all the scanner can do is wait for a response or timeout to determine whether the port is open or closed. This is a slightly more stealthy method.)
○ -p - (Port range) Conducts a scan by targeting specific ports instead of the 1,000 most commonly used ports that the default nmap scan uses (which takes a long time and is not stealthy). You use -p and specify the port(s) that you want to scan.
As a pen tester you may have a list of ports that you specifically want to check for exposure, because an attacker could potentially exploit them if open.
○ For example, if the target is running a web server, you may want to see if it is using port 80.
This method narrows the possibility that you will be noticed by an IDS/IPS or firewall.
Of course, all of these methods can be combined with one another to increase or decrease their stealth, as well as with output to file.
○ -sX - (Xmas scan) Conducts a scan by sending a packet with the FIN, PSH (push), and URG flags set to one. It's called a Christmas scan because it lights up your IDS systems and logs like a Christmas tree.
If you are running this scan, you clearly want to be noticed. One reason why a pen tester would actually use this scan is to see if the organization's analysis team is actually paying attention. It's something easy to send out to see if people will catch it or not. If they don't see this on their systems, then they're likely not monitoring their logs closely.
Nmap Port States
When you conduct your scan you will have to understand the port states that appear in the output of your scans. These states can tell you what the port is able to do and whether or not you have a vulnerability. These port states are:
○ Open - An application on the host is accepting connections. (For example, if running a web server, ports 80 and 443 will need to be open to accept web traffic.)
○ Closed - The port responds to probes by sending a reset (RST) packet because the application is unavailable to accept connections. (If you're running a web server you have no need to have port 23 open, so if you send a packet bound for port 23 it's going to send back a reset packet to let you know that the port is closed.)
○ Filtered - Nmap cannot probe the port, usually due to a firewall blocking the scans on the network or host. (With this, nmap doesn't actually know whether the port is closed or not.)
Additional Filtered Port States
There are three other states that are displayed if the scan cannot determine a reliable result. These are:
○ Unfiltered - Nmap can probe the port but cannot determine if it is open or closed.
○ Open|Filtered - Nmap cannot determine if the port is open or filtered when conducting a UDP or IP protocol scan.
○ Closed|Filtered - Nmap cannot determine if the port is closed or filtered when conducting a TCP idle scan (-sI).
These three port states are not nearly as common as Open, Closed, and Filtered.
Port states are important to understand because an open port indicates a host might be vulnerable to an inbound connection.
Nmap Fingerprinting
The detailed analysis of services on a host is called fingerprinting, because each OS or application software that underpins a network service responds to probes in a unique way. This allows the scanning software to guess at the software name and version without having privileged access to the host.
When open ports are discovered, you can use the nmap -sV or -A switch to probe a host more intensively to discover the following information:
○ Protocol - Do not assume that a port is being used for its "well-known" application protocol. Nmap can scan traffic to verify whether it matches the expected signature (HTTP, DNS, SMTP, etc.).
○ Application name and version - The software operating the port, such as Apache web server or Internet Information Services (IIS) web server.
○ OS type and version - Use the -O switch to enable OS fingerprinting (or -A to use both OS fingerprinting and version discovery).
○ Host name
○ Device type - Not all network devices are PCs. Nmap can identify switches and routers or other types of networked devices, such as NAS boxes, printers, webcams, and firewalls.
Nmap comes with a database of application and version fingerprint signatures, classified using a standard syntax called Common Platform Enumeration (CPE). Unmatched responses can be submitted to a web URL for analysis by the community.
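To tie the -oX output format to these port states, here is a minimal Python sketch (standard library only) that parses an nmap XML report and lists each host's open ports. The file path is hypothetical; the element and attribute names follow nmap's documented XML output schema.

import xml.etree.ElementTree as ET

# Parse an nmap XML report produced with, e.g.: nmap -sV -oX output.xml <target>
tree = ET.parse("output.xml")  # hypothetical path to the report
for host in tree.getroot().iter("host"):
    addr = host.find("address").get("addr")
    for port in host.iter("port"):
        state = port.find("state").get("state")  # open / closed / filtered ...
        if state == "open":
            service = port.find("service")
            name = service.get("name") if service is not None else "unknown"
            print(f"{addr}\t{port.get('protocol')}/{port.get('portid')}\t{name}")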
hping
An open-source spoofing tool that provides a pen tester with the ability to craft network packets to exploit vulnerable firewalls and IDS/IPS.
With hping you can do things like:
○ Host and port detection and firewall testing
○ Timestamping
○ Traceroute
○ Fragmentation
○ DoS
Host/Port Detection and Firewall Testing
hping can send a SYN or ACK packet to conduct detection and testing.
One example of that would be hping3 -S -p80 -c1 192.168.0.1:
○ This will send 1 (-c1) SYN packet (-S) to port 80 (-p80).
○ This is a stealthy approach.
Another option would be to use -A:
○ hping3 -A -p80 -c1 192.168.0.1
○ This will use an acknowledgement packet instead of a SYN packet.
Timestamping
This is used to determine the uptime of the system (how long the device has been running since its last power cycle).
This can allow you to determine which devices are hosts and which are servers, as servers will have much longer uptimes than host devices.
This would look like:
○ hping3 -c2 -S -p80 --tcp-timestamp 192.168.0.1
○ Send 2 SYN packets to port 80 to determine uptime.
Traceroute
This will use arbitrary packet formats, such as probing DNS ports using TCP or UDP, to perform traces when ICMP is blocked on a given network.
Fragmentation
Attempts to evade detection by IDS/IPS and firewalls by sending fragmented packets across the network for later reassembly.
DoS
Can be used to perform flood-based DoS attacks from randomized source IPs.
DoS and fragmentation are not really effective against modern OSs and network appliances: fragmented packets will be reassembled and scanned anyway, and DoS attacks can be blocked.
CySA+ Lesson 8 Part C&D: Analyze Output from Infrastructure Vulnerability Scanners
This section covers three particular areas of the exam objectives:
○ Given a scenario, utilize threat intelligence to support organizational security.
○ Given a scenario, perform vulnerability management activities.
○ Given a scenario, analyze the output from common vulnerability assessment tools.
While running your vulnerability scans and reviewing the logs of those scans, you are going to be bombarded with a high quantity of information. An important part of vulnerability management lies in the analysis of these reports, with the aim of filtering false results and prioritizing critical items for remediation.
Vulnerability Scan Reports
Vulnerability assessment tools will generate a summary report of all vulnerabilities discovered during the scan. Some of these tools will color-code vulnerabilities in terms of their criticality (red typically indicates immediate action is required).
These reports will also provide a link to details about each vulnerability and how it can be remediated.
Report Validation Techniques
In addition to identifying the nature of vulnerabilities detected during a scan, you need to support your overall vulnerability management program by validating the results and correlating what you've learned with other data points in your organization.
You need to be able to reconcile the results of a scan with what you know about your environment, as well as what you know about the current security landscape.
Not only can this help you validate your current situation, but you can also use this information to determine vulnerability trends that may form over time.
The following techniques can be used to validate the accuracy of scan results:
○ Reconcile results
○ Correlate results with other data sources
○ Compare to best practices
○ Identify exceptions
Reconciling Report Results
Vulnerability scanners can misinterpret the information they get back from their probes.
○ For example, a scan might suggest the presence of an Apache web server in a predominantly Windows environment. You would verify the IP of the server and check that host: perhaps a NAS appliance is operating without your knowledge, or perhaps a software application has installed Apache to work on the local host.
If you cannot reconcile a finding, consider running a scan using different software to provide confirmation of the result.
Correlating Scan Results with Other Data Sources
Reviewing related system and network logs can also enhance the validation process.
For example, assume that your vulnerability scanner identified a running process on a Windows machine. According to the scanner, the application that creates this process is known to be unstable, causing the operating system to lock up and crash other processes and services. When you search the computer's event logs, you notice several entries over the past couple of weeks indicating the process has failed. Additional entries show that a few other processes fail right after. In this instance, you've used a relevant data source to help confirm that the vulnerability alert is, in fact, valid.
Comparing Scan Results to Best Practices
Some scanners measure systems and configuration settings against best-practice frameworks (a compliance scan). This might be necessary for regulatory compliance, or you might voluntarily want to conform to externally agreed standards of best practice.
In some cases compliance scans might return results that are not high priority or can be considered low risk.
Identifying Exceptions
In some cases, you'll have chosen to accept or transfer the risk of a specific vulnerability because it fits within your risk appetite to do so. The scanner may still produce this vulnerability in its report. You can therefore mark this item as an exception so that it won't contribute to your remediation plan.
○ For example, a scanner may tell you that port 80 is open on your web server. Although this is a common attack vector, the port must remain open so that the system can fulfill its functions.
Mitigating Vulnerability Issues
The results of your vulnerability scans will more often than not require careful judgement and experience to analyze, in order to determine the source of some issues. Don't expect the information to be straightforward or clear-cut. You will have to use discernment to draw out the facts of what is being displayed by these scans.
Remediation and Mitigation Plans
Once a vulnerability's source or nature has been identified and deemed valid, you will need to determine how to prioritize your response and remediation actions.
Using your pre-existing baselines and risk analysis efforts, you'll be able to decide which vulnerabilities are the most critical versus which are the least critical. Scanner reports often give their best guess by scoring each vulnerability item, but this does not typically take into account the various contextual factors that are unique to your environment.
Remediation (or risk mitigation) is the overall process of reducing exposure to the effects of risk factors.
If you deploy countermeasures that reduce exposure to a threat or vulnerability, that is risk deterrence (or reduction).
Risk reduction refers to controls that can either make a risk incident less likely or less costly (or perhaps both).
In some cases, reports generated by a vulnerability assessment may offer suggestions as to how to fix any detected security issues.
Remediation is not always going to involve a quick fix (actually, it rarely ever will); it will involve a comprehensive approach to managing the risks that vulnerabilities expose your organization to.
The ultimate goal of remediation is to move the organization as close as possible to a level of acceptable risk for a given situation.
Prioritizing Efforts
One of the most important preliminary steps in the remediation process is to prioritize your efforts.
There are several factors that can affect which problems you choose to tackle and in what order, including how critical the affected system or information is, and how difficult the remediation is to implement.
Having a plan for prioritization will enable you to focus on the most important targets, and so reduce risk as much as possible (see the sketch at the end of this section for an illustration).
Change Control
Another key step in the remediation process is planning for change control implementation.
A change control system may already be in place to manage how changes are applied, whether security-related or otherwise. You need to ensure that you communicate your remediation efforts with the personnel who oversee change control so that the process goes smoothly.
In some cases, you may need to prove that your suggested changes will have a minimal impact on operations and will fix what they claim to. By conducting sandbox tests on your suggested changes, the organization can be more confident about pushing the remediation to production systems.
Risk Acceptance
Risk acceptance (or retention) means that no countermeasures are put in place, either because the level of risk does not justify the cost or because there will be unavoidable delay before the countermeasures are deployed.
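As a purely illustrative sketch of the prioritization idea above (the vulnerability names, severity scores, and asset-criticality weights are hypothetical, not from any real scanner report), findings can be ranked by combining a scanner's severity score with how critical the affected asset is to the business:

# Hypothetical findings: (vulnerability, CVSS-style severity 0-10, asset criticality 1-5)
findings = [
    ("Outdated OpenSSL on web server",  7.5, 5),
    ("Telnet enabled on lab switch",    5.0, 2),
    ("Missing OS patch on file server", 6.1, 4),
]

# Simple composite risk score: severity weighted by asset criticality,
# so a medium-severity flaw on a critical asset can outrank a higher-severity
# flaw on a low-value one.
ranked = sorted(findings, key=lambda f: f[1] * f[2], reverse=True)
for vuln, severity, criticality in ranked:
    print(f"{severity * criticality:5.1f}  {vuln}")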