CySA+ Lessons 1-4 (Quiz 1) PDF
Summary
This document provides an overview of different types of security controls, including technical, operational, managerial, preventative, detective, and corrective controls. It also discusses the importance of threat intelligence and security information and event management (SIEM) in a cybersecurity context.
CySA+ Lessons 1-4 (Quiz 1)

CySA+ Lesson 1 Objectives
Identify security control types
Explain the importance of threat data and intelligence

SOC
A Security Operations Center (SOC) is a centralized function within an organization employing people, processes, and technology to continuously monitor and improve an organization's security. A SOC's main purpose is to prevent, detect, analyze, and respond to cybersecurity incidents. Usually only larger organizations have a SOC.

Security Control Categories
Categories of controls include:
○ Technical
○ Operational
○ Managerial
○ Preventative (before)
○ Detective (during)
○ Corrective (after)
Other types of controls include:
○ Physical
○ Deterrent
○ Compensating

Technical Controls
Technical controls consist of hardware and software components that protect systems against cyber threats. Examples include:
○ Firewalls
○ IDS
○ IPS
○ Encryption
○ Identification and authentication mechanisms

Operational Controls
Operational controls are the controls implemented by people rather than systems. These controls supplement the security of an organization. They often rely on management activities as well as technical controls. Operational controls include:
○ Labeling
○ Documentation
○ Personnel security
○ Security awareness and training
○ Incident response capability
○ Physical security

Managerial Controls
Managerial controls are actions taken to manage the development, maintenance, and use of the system, including:
○ System-specific policies
○ Procedures
○ Rules of behavior
○ Individual roles and responsibilities
○ Individual accountability
○ Personnel security decisions

Preventative Controls
Preventative controls are designed to be implemented prior to a threat and reduce or avoid a successful threat event. Examples of preventative controls are:
○ Firewalls
○ Encryption
○ Physical barriers
○ Policies
○ Security awareness training
○ NAC (Network Access Control)
○ Patches/updates
○ System hardening

Detective Controls
Detective controls provide visibility into potential malicious activity, breaches, and attacks on an organization's IT environment. These controls include:
○ Logging of events
○ Security personnel
○ IDS/IPS
○ Cameras
○ Motion sensors
○ Pressure sensors
○ Thermal sensors

Corrective Controls
Corrective security controls utilize technical, physical, and administrative measures to restore systems or resources to their previous state after a security incident.

Physical Controls
Implementation of security measures in a defined structure used to deter or prevent unauthorized access to sensitive material.

Deterrents
Controls implemented to discourage attackers from attacking systems. Other than making the attacker think twice about attacking your system, a deterrent control may not do anything else.

Compensating Controls
A compensating control (sometimes called an alternative control) is a means to satisfy the requirement for a security measure that is deemed impossible or impractical to implement at the present time.

Security Control Categories
Note: Diversifying your control types is much better than just using one control type and calling it a day.

CIA and Organizational Goals
The primary goal of information security is to maintain all aspects of the CIA triad:
○ Confidentiality
○ Integrity
○ Availability
You must analyze controls to identify how these goals are met.

SIEMs
Security Information and Event Management (SIEM) is a collection of analysis and security tools that enable threat detection, compliance, and proper incident response.
A SIEM is a place where data converges from different parts of your IT environment to allow for monitoring at a central location.

Cyber Threat Intelligence
Cyber Threat Intelligence (CTI) is an area of cybersecurity that focuses on the collection and analysis of information about current and potential attacks that threaten the safety of an organization or its assets. CTI has 5 phases that seamlessly flow into each other:
○ Requirements
○ Collection and processing
○ Analysis
○ Dissemination
○ Feedback

Security Intelligence Cycle—Requirements and Collection
Requirements
○ How will security intelligence support business goals?
○ What are the use cases?
○ What resources are required?
○ What constraints apply?
Collection and processing
○ Using procedures and tools to determine the sources of data
○ Security information and event management (SIEM)

Security Intelligence Cycle—Analysis, Dissemination, and Feedback
Analysis
○ Automation tools can be used to analyze data
Dissemination
Feedback
○ Publish briefs or information to spread awareness to the rest of the workforce

Threat Intelligence Sources
The intel that is gathered from your sources MUST contain as many of the following characteristics as possible to be considered viable and valuable:
○ Timeliness (Did you receive the information at a reasonable time?)
○ Relevancy (Does the information actually pertain to you?)
○ Accuracy (Is the information actually correct?)
○ Proper confidence levels

Security Control Selection Based on CIA Requirements
Every organization has a different set of organizational goals. It's best to analyze the goals and objectives of your organization and select your approach to ensure that those goals are met. Does your organization value confidentiality most? Integrity? Availability?

Proprietary/Closed-Source Intelligence Sources
Platforms and feeds
Closed-source data from private networks, honeypots, research activity,…

Open-Source Intelligence Sources
Subscription-free feeds and reports
Government agencies
Platform providers with open-source feeds
Blogs and forums to keep up-to-date

ISACs
Information Sharing and Analysis Centers:
○ Help critical infrastructure owners and operators protect their facilities, personnel, and customers from cyber and physical security threats and other hazards.
Non-profit organizations that provide a central resource for gathering information on cyber threats, as well as allow two-way sharing of information between the private and public sector about root causes, incidents, and threats.

Threat Intelligence Sharing

CySA+ Lesson 2A Objectives
Classify threats and threat actor types
Utilize attack frameworks and indicator management
Utilize threat modeling and hunting methodologies

Threat Classification
Known threats
○ Malware
○ Documented exploits against software vulnerabilities
Unknown threats
○ Zero-day exploits
○ Obfuscated malware code

Threat Actor Types
Threat intelligence must involve developing insight into the behavior of different types of adversary groups. It is not enough to identify malware signatures and technical attack vectors only.
○ Organized crime groups, hacktivist groups, and various other forms of threat entities can be monitored to determine who poses relevant threats to your organization
○ Nation-state
○ Organized crime
○ Hacktivist
○ It is important to identify the level of resources and funding that different adversaries might possess and whether they can develop malware that can evade basic security controls.
We will delve into the specific identifying factors of the following threat actor types:

Nation-State
Groups with cybersecurity expertise, backed by governments to achieve either military or commercial goals. Nation-states have the financial backing and technological skills that fall under the category of APTs. An APT (Advanced Persistent Threat) refers to the ongoing ability of an adversary to compromise network security to obtain and maintain access (usually without notice for quite some time) using a variety of tools and techniques. Nation-state actors have been implicated in attacks relating to energy and electoral systems. Goals of nation-state actors are primarily espionage and strategic advantage. Some states may sponsor multiple adversary groups, and these groups may have different objectives, resources, and degrees of collaboration with one another.

Organized Crime
In some countries cybercrime has far surpassed physical crime in terms of incidents and monetary losses.
○ Organized crime groups don't even have to reside within the same country as those that they target.
○ Typical activities of these groups include fraud and blackmail.

Hacktivists
A hacktivist group will usually use their skills to promote a political agenda or ideology that they are affiliated with. They may attempt to obtain and release confidential information to the public, as well as perform DoS attacks or deface websites. These groups mostly target political adversaries, media stations, financial groups, and companies. Examples of hacktivist groups include:
○ Anonymous
○ WikiLeaks
○ LulzSec
○ Cyber Partisans

Insider Threat Types
Many threats operate outside of an organization, having to work their way inside to perform their malicious activities. An insider threat is an actor who already has inside access to the organization that it is targeting, such as an employee or contractor. An insider threat may not always be malicious. Insider threats can fall under two categories:
○ Intentional (someone who is purposely executing an attack from within the organization)
○ Unintentional (someone who, through neglect or an active mistake, leads to a vulnerability or data leak within the organization. They can also unwittingly leave attack vectors open for actual malicious actors)
One example of an unintentional insider threat is shadow IT.

Commodity Malware
Code that can be used in general circumstances. These tools are usually prepackaged and up for sale on the internet for people to purchase and use. Examples of commodity malware include:
○ RATs (Remote Access Trojans)
○ DDoS tools
Commodity malware is usually not designed with a specific target in mind.

Zero-Day Threats
A vulnerability that is discovered or exploited before a vendor can issue a patch to fix it. In some cases the vendor may not even know that this vulnerability exists in the first place. Zero-days are mainly discovered and exploited by adversary groups as they continually look for vulnerabilities that can be exploited. Security researchers also discover these vulnerabilities and inform the vendors privately so that they can fix them.
Zero-day vulnerabilities have significant financial value. A zero-day exploit for a mobile OS can be worth millions of dollars. Due to this, an adversary will only use a zero-day vulnerability for high-value attacks.

Advanced Persistent Threat (APT)
An APT can either be a group of attackers or a method of attack that a group uses. APTs are highly organized and highly capable groups who have the time and ability to discover exploits for high-value targets and maintain persistence in their systems. With an APT the adversary is going to remove evidence of their attack while still having a backdoor into the system to allow them to reconnect and exfiltrate the data that they want at the right time for them.

CySA+ Lesson 2B: Utilize Attack Frameworks and Indicator Management
Classifying threat actor types will provide you with basic insight into adversary motivations and capabilities. However, more sophisticated tools are needed to provide additional intelligence due to the diversity of threat actors. Frameworks can be used as a basis for identifying and analyzing indicators of compromise (IoC) that provide evidence of attack or intrusion events.

Indicators of Compromise (IoC)
An IoC is a residual sign that an asset or network has been successfully attacked. An IoC can be something specific and clearly identifiable like a malware signature, or it may require the interpretation of an analyst looking at a set of data (for example, a SOC analyst may notice a significantly large file being downloaded from the organization's servers at a time that is unusual or suspicious). These interpretations are based on the subjective viewpoint of the analyst that is looking at the incoming reports. The analyst must then make a judgment call. It is important to correlate multiple IoCs to produce a more complete and accurate portrayal of events. This is because IoCs are often identified through anomalous activity rather than plainly seen occurrences. IoCs may include:
○ Unauthorized software and files
○ Suspicious emails
○ Suspicious registry and file system changes
○ Unknown port and protocol usage
○ Excessive bandwidth usage
○ Rogue hardware
○ Service disruption
○ Suspicious or unauthorized account usage

Threat Research
Signature-based detection may not work against sophisticated adversary tactics because the tools used by the attacker are less likely to be identifiable from a database of known file-based malware. As a whole, threat research has moved beyond just using static malware signatures to identify and correlate IoCs. Multiple IoCs can be linked to identify a pattern in an adversary's behavior. Analyzing this behavior can be helpful in performing proactive threat hunting in the future.

Behavioral Threat Research
Behavioral threat research correlates IoCs into attack patterns.
○ For example, analysis of previous hacks and intrusions produces definitions of the tactics, techniques, and procedures (TTPs) used to perform attacks.
Some TTPs are as follows:
○ DDoS: A sudden spike of network traffic might be an indicator of this type of attack.
○ Viruses/worms: An increase in CPU or memory usage that causes a noticeable slowdown in device productivity could be a sign of malware.
○ Network reconnaissance: You may notice scans occurring against ports or a range of IP addresses. This could indicate an attacker is looking for any opening that can be exploited.
○ Data exfiltration: Spikes in network transfers at times when there is usually no/low network traffic could be an indication of someone removing valuable data from the network.

Kill Chain
The kill chain refers to the steps that are usually taken in order for an attacker to reach their objectives. The Lockheed Martin kill chain is broken into 7 steps:
○ Reconnaissance
○ Weaponization
○ Delivery
○ Exploitation
○ Installation
○ Command and control
○ Actions on objectives

Kill Chain: Reconnaissance
In this phase an attacker will attempt to stealthily discover which method is best to attack your system, and which tools to utilize to accomplish the job. They will attempt to discover whatever they can about the security systems that a target has in place. This phase may involve an attacker using passive information gathering or more active techniques like port scanning and host discovery. Of course, during this phase they want to evade detection. They will likely make use of a botnet or zombie hosts to scan systems if they opt for active scanning. With this method, even if the scan is detected, it will trace back to the zombie host rather than the actual attacker.

Kill Chain: Weaponization
The attacker creates the payload code and the exploit code designed for the target's system. The payload code will be used to drop the exploit onto the system, while the exploit code is what performs the actions/attack against the vulnerability that was discovered in the previous phase.

Kill Chain: Delivery
The attacker identifies the best way to get the weaponized code to the target. Methods include:
○ Email attachments or links
○ USB
○ Text

Kill Chain: Exploitation
If the delivery phase is successful, the weaponized code is executed on the target's system.

Kill Chain: Installation
The weaponized code runs a remote access tool on the system, allowing the attacker to maintain persistence within the system.

Kill Chain: Command and Control (C2/C&C)
The code establishes an outbound connection to a C2 server. A C2 server is the server that the attacker is using to control the remote access tool. It can also be used to download additional tools onto the system to help in performing further attacks.

Kill Chain: Actions on Objectives
The attacker uses the access that they have achieved to collect information from the target system and transfer it to a remote system (data exfiltration). Data exfiltration is not the only type of attack that can be performed in this phase, though. Since the attacker now has remote access, they can perform a number of attacks that would target any aspect of the CIA triad.

MITRE ATT&CK Framework
Due to the kill chain's focus on perimeter security, some organizations feel that it is too basic to accurately represent modern threat events. Alternatively, there is the MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework. This tags each technique with a unique ID and places it in one or more tactic categories, such as target selection, initial access, persistence, lateral movement, or command and control. The framework can be viewed at attack.mitre.org

Diamond Model of Intrusion Analysis
Created by Sergio Caltagirone, Andrew Pendergast, and Christopher Betz, this model provides a framework to analyze an intrusion event (E) by exploring the relationships between four core features:
○ Adversary
○ Capability
○ Infrastructure
○ Victim
The four features are represented by the four vertices of a diamond-shaped graph.
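As an illustrative aid only (not part of the CySA+ material), the four Diamond Model vertices can be captured as a simple record so that related events can be grouped and pivoted on during analysis; the field values below are hypothetical.

from dataclasses import dataclass

@dataclass
class DiamondEvent:
    """One intrusion event (E) described by the four Diamond Model vertices."""
    adversary: str       # who is believed to be behind the event
    capability: str      # tool, malware, or technique used
    infrastructure: str  # IP, domain, or asset the capability was delivered through
    victim: str          # targeted person, system, or organization

# Hypothetical example event: phishing mail delivering a commodity RAT
event = DiamondEvent(
    adversary="unknown (suspected organized crime group)",
    capability="commodity RAT delivered via macro document",
    infrastructure="mail relay at suspicious-domain.example",
    victim="finance department workstation",
)
print(event)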
CySA+ Lesson 2C: Utilize Threat Modeling and Hunting Methodologies

Threat Modeling, Adversary Capability, and Attack Surface
Multiple frameworks have been established in order to assist in the analysis of information systems. When trying to quantify a risk you may want to ask:
○ How can an attack be performed?
○ What could the potential impact be on the CIA triad?
○ How likely is the risk to be realized?
○ What mitigation strategies are already in place?

Threat Modeling
Threat modeling is designed to help identify the main risks and tactics, techniques, and procedures that a system may be subjected to by evaluating a system from the perspective of the attacker and from the perspective of the defenders.
○ This will help to assess risks against corporate networks and business systems or specific targets.
The results of threat modeling assessments can be used to identify which security monitoring and detection systems should be used.

Adversary Capability
One of the primary stages of threat modeling is identifying where threats are coming from. Determining the type or classification of threats and their ability to effectively target your systems is called "adversary capability." MITRE identifies the following levels of capability:
○ Acquired and augmented: Uses commodity malware and existing tools and techniques only.
○ Developed: These threats can identify and exploit a zero-day.
○ Advanced: Can exploit supply chains to introduce vulnerabilities in proprietary and open-source products and plan campaigns that exploit suppliers and service providers.
○ Integrated: Can additionally use non-cyber tools, such as political or military assets.

Total Attack Surface
This represents all of the points within your network that an attacker could interact with in order to potentially compromise it. Determining the attack surface of your network is extremely important and can be done by taking inventory of your assets deployed in the network, as well as identifying all processes that those assets support.

Attack Vector
This is the specific means by which the attacker exploits a vulnerability on the attack surface. MITRE identifies three principal categories of attack vector:
○ Cyber: Use of a hardware or software IT system. Some examples of cyber attack vectors include email or social media messaging, USB storage, a compromised user account, an open network application port, a rogue device, and so on.
○ Human: Use of social engineering to perpetrate an attack through coercion, impersonation, or force. Note that attackers may use cyber interfaces to human attack vectors, such as email or social media.
○ Physical: Gaining local access to premises in order to effect an intrusion or denial of service attack.

Threat Likelihood
Threat likelihood is the probability that an attack will be realized. You can determine the likelihood of a threat by using the following methods:
○ Discovering the threat's motivation. What does an attacker stand to gain from conducting an attack?
○ Conducting a trend analysis to identify emerging adversary capabilities and attack vectors. How effective are these attacks, and how have they been exploited before?
○ Determining the threat's annual rate of occurrence (ARO). How often does the threat successfully affect other enterprises?
Determining impact means calculating the dollar cost of the threat in terms of disrupted business workflows, data breach, fines and contract penalties, and loss of reputation and customer confidence. (See the worked sketch below.)
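A minimal worked sketch of how likelihood and impact can be combined into an annualized dollar figure. It uses the standard single loss expectancy (SLE) and annualized loss expectancy (ALE) formulas, which are not spelled out in the notes above, and the numbers are hypothetical.

# Hypothetical risk calculation: ALE = SLE x ARO, where SLE = asset value x exposure factor.
asset_value = 250_000        # dollar value of the asset at risk
exposure_factor = 0.30       # fraction of the asset value lost per successful attack
aro = 2                      # annual rate of occurrence (expected successful attacks per year)

sle = asset_value * exposure_factor   # single loss expectancy per incident
ale = sle * aro                       # annualized loss expectancy

print(f"SLE = ${sle:,.0f} per incident")   # SLE = $75,000 per incident
print(f"ALE = ${ale:,.0f} per year")       # ALE = $150,000 per year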
Proactive Threat Hunting
Unlike reactive threat hunting, which begins its process as a result of an attack that has already occurred, proactive threat hunting utilizes threat research done beforehand to discover whether there is evidence of TTPs present within a system or network.
○ Proactive threat hunting can be done by using open-source intelligence that can be found through trusted mediums online or through exchanging information with other organizations.
○ This helps with improving detection capabilities, since analysts can practice their vulnerability management skills before an attack occurs.
○ The threats discovered via proactive threat hunting can also provide signature-based detection service providers with new sources to incorporate into their systems.
If threat hunting identifies attack vectors that were previously unknown, this can help analysts reduce the attack surface area and block attack vectors. Proactive threat hunting is done with an "assume breach" mindset.
○ This means that the analyst approaches their hunt with the acknowledgement that a breach could have already occurred within their network and that they must discover it, or how it could have potentially occurred.
Assume breach will help focus on addressing gaps in:
○ Detection of attack and penetration
○ Response to attack and penetration
○ Recovery from data leakage, tampering, or compromise
○ Prevention of future attacks and penetration

OSINT
Open Source Intelligence (OSINT) is publicly available information, plus the tools for aggregating and searching said information. Some sources of OSINT include:
○ Publicly available information such as IP addresses of an organization's DNS servers, names, email addresses, phone numbers, and physical addresses. Some of this easily accessible data can then be used to discover additional information about an individual or organization.
○ Social media sites like Facebook and LinkedIn can be mined for an organization's information. Depending on how much an employee shares about the company they work for, you can discover a lot of information.
○ HTML code: the HTML code of an organization's web page can provide information such as IP information, names of web servers, operating system versions, file paths, and even the names of developers.
○ Metadata: attackers can use metadata to identify information that a person doesn't think could be used to help identify further information about them.

CySA+ Lesson 3A Objectives
This lesson covers information for objectives 3.1 and 4.4, which state:
3.1 Given a scenario, analyze data as part of security monitoring activities.
4.4 Given a scenario, utilize basic digital forensics techniques.
You must be able to analyze packet data and traffic-monitoring statistics to identify indicators of abnormal activity.

Network Forensics Analysis Tools
Network traffic can be captured from either a host or from network segments.
○ Using a host means that only traffic directed at that host is captured.
○ Capturing a network segment involves capturing the traffic within the sub-networks (subnets, VLANs).
Network analysis tools include:
○ Switched Port Analyzer (SPAN)
○ Test Access Port (TAP)

Switched Port Analyzer (SPAN)
Sometimes called "port mirroring" or "port monitoring," SPAN selects network traffic for analysis by a network analyzer. SPAN is a dedicated port on a switch that takes a mirrored copy of network traffic from within the switch to be sent to a destination
(typically a monitoring device or analysis tool).
○ In short, it copies network packets received on one port and sends them to another port.
○ There may be more than one source port but only one destination port in a SPAN session.
The switch copies frames as they are transferred out of the port and then sends them to a network analyzer.

Network Test Access Points (TAPs)
Also called Terminal Access Points, these hardware-based tools are used for acquiring network traffic as they sit inline in a network segment, between two appliances (router, switch, or firewall), and allow you to access and monitor traffic. TAPs transmit both the send and receive data streams simultaneously on separate dedicated channels, ensuring all data arrives at the monitoring device in real time. TAPs have no IP address and no MAC address, and cannot be hacked.

tcpdump
Tcpdump is a command-line packet capture utility for Linux (WinDump is the Windows version of tcpdump).
○ The basic syntax of the command is "tcpdump -i eth0", where "-i" stands for "interface" and "eth0" is the interface to listen on.
○ The utility will display captured packets until halted manually (Ctrl+C).

Tcpdump Switches
Like most other command-line tools, tcpdump uses switches that change the functionality of the tool based on what is input.

Wireshark
Wireshark is an open-source graphical packet capture utility, with installer packages for most operating systems. Wireshark output is broken into 3 separate panes:
○ The top pane shows each frame
○ The middle pane shows the fields from the currently selected frame
○ The bottom pane shows the raw data from the frame in hex and ASCII
Wireshark is capable of interpreting the headers and payloads of hundreds of network protocols. Wireshark uses the same capture filter syntax as tcpdump, but expressions can be built via the GUI tools as well. Output can be saved to a .pcap file, and files can be loaded for analysis. The "Follow TCP Stream" context command can reconstruct the contents of a packet exchange within a TCP session.

Packet Analysis
Packet analysis refers to a frame-by-frame inspection of captured frames. You can view MAC and IP source and destination addresses. Packet analysis can be used to detect whether packets passing over a standard port have been manipulated in some nonstandard way (for example, beaconing to a C2 server). You can inspect protocol payloads to try to identify data exfiltration attempts, and you can view attempts to contact suspicious domains or websites.

Protocol Analysis
Whereas packet analysis means looking at the headers and payloads of frames within a packet capture, protocol analysis means using statistical tools to analyze a sequence of packets. Protocol analysis analyzes statistics for a whole packet transmission to assist in identifying individual frames for closer packet analysis.
○ Contents and metadata of protocol sessions can reveal insights when packet content might not be available (for example, if the packet contents are encrypted).
By reviewing the data of the session, certain things can be inferred or deduced about the traffic type:
○ A brief exchange of small payloads with consistent pauses between each packet might suggest an interactive session between two hosts.
○ A sustained stream of large packets might suggest a file transfer.
You should be alert to changes in relative usage of protocols.
○ This requires a baseline for comparison so that you can identify anomalous values.
○ For example, if you suddenly notice multi-gigabyte transfers after office hours, you may want to investigate the cause and the contents of that transmission.
The Wireshark Protocol Hierarchy Statistics report shows which protocols generated the most packets or consumed the most bandwidth.

IP Address and DNS Analysis
One of the principal areas of interest when analyzing traffic for signs of compromise is access requests to external hosts, since many intrusions rely on a C2 server to download additional attack tools and to exfiltrate data. This is a very valuable area to analyze because you can correlate IP addresses, domains, and URLs seen in local network traffic with reputation tracking whitelists and blacklists via a SIEM. One thing worth noting is that, historically, malware would be configured to contact a C2 server using a static IP address or range of IP addresses, or DNS names. This type of beaconing is not highly effective because the malicious addresses can be identified quite easily and blocked, and the malware located and destroyed. These types of attacks can be identified by correlating the destination address and domain information from packet traces with a threat intelligence feed of known-bad IP addresses. Malware these days has switched to domains that are dynamically generated using a Domain Generation Algorithm (DGA).

Domain Generation Algorithm
To avoid detection, malware now uses DGAs that work as follows:
a. The attacker sets up one or more dynamic DNS (DDNS) services, typically using fake credentials and payment methods.
b. The malware code implements a DGA to create a list of new domain names. The DGA uses a base value (seed) plus some time- or counter-based element, or something else entirely random.
c. A parallel DGA, coded with the same seed and generation method, is used to create name records on the DDNS service. This will produce a range of DNS names that may consist of a random string of characters.
d. When the malware needs to initiate a C2 connection, it tries a selection of the domains it has created.
e. The C2 server will be able to communicate a new seed (base value) to generate a new domain.
A DGA can be combined with techniques to continually change the IP address that a domain name resolves to. This continually changing architecture is referred to as fast flux.

Uniform Resource Locator Analysis
URLs don't only point to a host or service location on the internet. They can also encode some action or data to submit to the server host. This is a common vector for malicious activity. URL analysis is performed to identify what malicious script or activity might be encoded within a URL. There are various methods that can be used to identify malicious behaviors by processing the URL within a sandbox. Methods include:
○ Resolving percent encoding
○ Assessing what sort of redirection the URL might perform
○ Showing source code for any scripts called by the URL without executing them

Percent Encoding
Percent encoding allows a user to submit any unsafe character to the server within the URL. Its legitimate uses are to encode reserved characters within the URL when they are not part of the URL syntax and to submit Unicode characters. Percent encoding can be misused to obfuscate the nature of a URL and submit malicious input as a script or binary or to perform directory traversal. Percent encoding can exploit weaknesses in the way the server application performs decoding. (A short decoding sketch follows below.)
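A minimal sketch of resolving percent encoding during URL analysis, using Python's standard urllib.parse module; the URL and hostname shown are made-up examples of a directory traversal attempt hidden behind encoding.

from urllib.parse import unquote

# Hypothetical suspicious URL: "%2e%2e%2f" decodes to "../" (directory traversal),
# and double encoding ("%252e") can slip past a decoder that only runs once.
suspicious = "http://victim.example/files?path=%2e%2e%2f%2e%2e%2fetc%2fpasswd"

decoded_once = unquote(suspicious)
print(decoded_once)   # http://victim.example/files?path=../../etc/passwd

double_encoded = "http://victim.example/files?path=%252e%252e%252fetc%252fpasswd"
print(unquote(double_encoded))            # still contains %2e%2e%2f after one pass
print(unquote(unquote(double_encoded)))   # ../../etc/passwd only after a second pass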
URLs that make unexpected or extensive use of percent encoding should be treated carefully. A URL can contain only unreserved and reserved characters from the ASCII set. Unreserved characters are:
○ a-z A-Z 0-9 - . _ ~
Reserved ASCII characters are used as delimiters within the URL syntax and should only be used unencoded for those purposes. The reserved characters are:
○ : / ? # [ ] @ ! $ & ' ( ) * + , ; =
There are also unsafe characters, which cannot be used in a URL, such as control characters (TAB, End of File) and:
○ \ < > { }
You can use a resource such as W3Schools for a complete list of character codes. Full list: https://developer.mozilla.org/en-US/docs/Glossary/Percent-encoding

HTTP Methods
As part of URL analysis it's important to understand how HTTP operates.
○ An HTTP client makes a request to the HTTP server, beginning the TCP session.
○ The connection between the client and server can be used for multiple requests, or a client can start a new TCP connection for different requests.
○ GET - Retrieves a resource
○ POST - Sends data to the server
○ PUT - Creates or replaces a resource
○ DELETE - Removes a resource
○ HEAD - Retrieves the headers for a resource only (not the body)

HTTP Response Codes
The server response comprises the version number, a status code and message, plus optional headers and a message body. An HTTP response code is the header value returned by a server when a client requests a URL. Response codes are as follows:
○ 200 - Indicates a successful GET or POST request (OK)
○ 201 - Indicates that a PUT request has succeeded in creating a resource
○ 3XX - Codes in this range indicate a redirect, where the server tells the client to use a different path to access the resource
○ 4XX - Codes in this range indicate an error in the client request, such as requesting a nonexistent resource (404), not supplying authentication credentials (401), or requesting a resource without sufficient permission (403). Code 400 indicates a request that the server could not parse.
○ 5XX - Codes in this range represent a server-side error, such as a general error (500) or overloading (503). If the server is acting as a proxy, messages such as 502 (bad gateway) and 504 (gateway timeout) indicate an issue with the upstream server.
It's best to learn the categories of these HTTP response codes:
○ 1XX - Informational
○ 2XX - Request success
○ 3XX - Redirection
○ 4XX - Client error
○ 5XX - Server error
Not every system will use the same format for displaying its HTTP logs, so it's advisable to be able to draw out information from multiple sources of logs. Following the HTTP stream in Wireshark will also allow you to see the client request and the server's response to that request.

CySA+ Lesson 3 Part B
Firewall Log Review
A firewall is designed to provide you with a line of defense at the network's boundary to control the types of traffic that can flow in and out of a network. This is done with access control lists:
○ Access control lists are lists of permitted or denied network connections based on either IP addresses, ports, or applications in use. (A minimal illustration follows below.)
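As a minimal illustration of the access control list idea (not any particular firewall's syntax), the sketch below evaluates rules top-down and applies the first match, falling back to an implicit deny; the networks and ports are hypothetical.

import ipaddress

# Each rule: (action, source network, destination port); first match wins,
# and anything that matches no rule is denied by default (implicit deny).
acl = [
    ("deny",   ipaddress.ip_network("10.1.0.0/24"), 21),    # block FTP from this subnet
    ("permit", ipaddress.ip_network("10.0.0.0/8"),  443),   # allow internal HTTPS
]

def evaluate(src_ip: str, dst_port: int) -> str:
    ip = ipaddress.ip_address(src_ip)
    for action, network, port in acl:
        if ip in network and dst_port == port:
            return action
    return "deny"  # implicit deny

print(evaluate("10.1.0.102", 21))   # deny
print(evaluate("10.2.3.4", 443))    # permit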
Firewall logs can provide you with four types of useful security data:
○ Connections that are permitted or denied
○ Port and protocol usage in the network
○ Bandwidth utilization, with the duration and volume of usage
○ An audit log of all of the address translations (NAT/PAT) that occurred

Firewall Log Formats
Firewall logs are usually vendor specific, but we will look at two examples of firewall logs:
○ iptables - a Linux-based firewall that uses the syslog file format for its logs
○ Windows Firewall - uses the W3C Extended Log File Format

Iptables Logs
This is what an iptables log entry looks like:
Jan 11 05:53:09 lx1 kernel: iptables INPUT drop IN=eth0 OUT= MAC=00:15:5d:01:ca:55:00:15:5d:01:ca:ad:08:00 SRC=10.1.0.102 DST=10.1.0.10 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=4005 DF PROTO=TCP SPT=2584 DPT=21 WINDOW=64240 RES=0x00 SYN URGP=0
Reading the entry field by field:
○ Each log entry is prefixed with a time stamp, a host name, and the kernel where the message is being generated from.
○ Each logged rule will then have a log level, which tells you which rule was being applied when the event was logged. In this case a drop rule is being applied.
○ From here, values are listed in the format FIELD=value. First displayed are the interfaces involved; from the fact that IN=eth0 is the only interface displayed, we can deduce that this is something attempting to get into our network.
○ Next is the MAC address of the interface the packet is arriving on.
○ Next, the source (SRC) and destination (DST) IP addresses are listed.
○ SPT=2584 denotes a source port of 2584, and DPT=21 denotes a destination port of 21.

Windows Firewall
As noted, the Windows-based firewall uses the W3C Extended Log File Format. This is a self-describing format with a comment at the top of the log listing the fields used. Each log entry contains the values for those fields, in a space-delimited format.
Windows Firewall Logs (W3C Extended Fields)
c-ip: client IP (where the request came from)
cs-method: the action being taken by the client (for example GET or POST)
s-ip: server IP address (where the request landed)
cs-uri-stem: the file being requested
cs-uri-query: the query being performed by the client
s-port: port number being accessed by the client
cs-username: the name of the user accessing the server (if not an authenticated user, it will be represented by a hyphen in the statement)
sc-status: the code generated by the server in response to the request

Black Holes
In network architecture, a black hole drops traffic before it reaches its intended destination and without alerting the source.
○ For instance, if network traffic is directed to an IP address that has been assigned to a non-existent host, it will be discarded, meaning the inbound traffic will be ignored as if it disappeared into a figurative black hole.
A common and effective way to use black holes is by dropping packets at the routing layer to stop a DDoS attack. If you know the source address range of a DDoS attack, you can silently drop that traffic by configuring the router to send the attacking range to null0 (an interface that automatically drops all traffic).

Sinkholes
A sinkhole is similar to a black hole, and the terms are often used interchangeably. The difference with sinkholing is that you retain some ability to analyze and forward the captured traffic. Sinkhole routing can be used as a DDoS mitigation technique so that the traffic flooding an IP address is routed to a different network where it can be analyzed. One advantage of this is being able to identify the source of the attack and devise rules to filter it.

CySA+ Lesson 3 Part C
Analyze Endpoint Monitoring Output
In addition to network-based monitoring tools, security can be supplemented with host-based monitoring. Analysts must be able to use tools to identify behavioral anomalies and identify the techniques used by malware code to achieve privileges and persistence on network hosts. Endpoint data collection and analysis tools include:
○ AV
○ HIDS/HIPS
○ EPP
○ EDR
○ UEBA

Anti-Virus (A-V)
The first generation of A-V software is characterized by signature-based detection and prevention of known viruses. Products moved quickly to perform generalized malware detection, meaning not just viruses but worms, Trojans, spyware, and so on. While A-V software remains important, it is widely recognized as being insufficient for preventing data breaches.

Anti-Virus (Heuristic Based)
Heuristic-based detection uses the signatures of malware as input in order to learn how to determine whether or not something is malicious based on its characteristics rather than simply its signature.
○ Employs the use of artificial intelligence and machine learning
○ Discovers for itself what malware may look like rather than only being able to make 1-to-1 comparisons

Host-Based Intrusion Detection/Prevention (HIDS/HIPS)
HIDS provide signature-based detection via log and file system monitoring. HIDS come in many different forms with different capabilities, some preventative (HIPS). File system integrity monitoring uses signatures to detect whether a managed file image, such as an OS system file, driver, or application executable, has changed (see the sketch below). Most HIDS can also analyze system log files. Products may also monitor ports and network interfaces, and process data and logs generated by specific applications such as HTTP or FTP.
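A minimal sketch of the file integrity monitoring idea described above: hash a set of monitored files, store the values as a baseline, and report any file whose current hash no longer matches. The file paths and the flat-file baseline format are hypothetical; real HIDS products use signed databases and track far more metadata.

import hashlib
import json
from pathlib import Path

MONITORED = [Path("/etc/passwd"), Path("/etc/ssh/sshd_config")]  # hypothetical watch list
BASELINE_FILE = Path("baseline.json")

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline() -> None:
    """Record known-good hashes for every monitored file that exists."""
    baseline = {str(p): sha256_of(p) for p in MONITORED if p.exists()}
    BASELINE_FILE.write_text(json.dumps(baseline, indent=2))

def check_integrity() -> None:
    """Compare current hashes against the stored baseline and report changes."""
    baseline = json.loads(BASELINE_FILE.read_text())
    for path_str, known_hash in baseline.items():
        path = Path(path_str)
        if not path.exists():
            print(f"ALERT: {path} is missing")
        elif sha256_of(path) != known_hash:
            print(f"ALERT: {path} has changed since the baseline was taken")

# Typical use: run build_baseline() once on a known-good system,
# then run check_integrity() on a schedule and alert on any mismatch.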
Endpoint Protection Platform (EPP)
Endpoint protection usually depends on an agent running on a local host. If multiple security products install multiple agents (for example, one for A-V, one for HIDS, and another for a firewall), they can impact system performance and cause conflicts, creating false positives and leading to technical support incidents. An EPP is a single agent performing multiple security tasks, including malware/intrusion detection and prevention, secure search and browsing, DLP enforcement, and file/message encryption. In an enterprise solution, there will also be a single management dashboard for configuring and monitoring hosts. Many EPP solutions are cloud-managed, allowing the continuous monitoring and collection of activity data, along with the ability to take remote remediation actions, whether the endpoint is on the corporate network or outside of the office. These solutions are cloud-data assisted, meaning the endpoint agent does not have to maintain a local database of all known IoCs, but can check a cloud resource to find the latest verdicts on objects that it is unable to classify.

Endpoint Detection and Response (EDR)
Where EPP provides mostly signature-based detection and prevention, endpoint detection and response (EDR) is focused on logging of endpoint observables and indicators combined with behavioral- and anomaly-based analysis. The aim is not to prevent initial execution, but to provide real-time and historical visibility into the compromise, contain the malware within a single host, and facilitate remediation of the host to its original state.

User and Entity Behavior Analytics (UEBA)
UEBA solutions are less an endpoint data collection technology and more an analysis process supporting identification of malicious behaviors by comparison to a baseline. The analytics software tracks user account behavior across different devices and cloud services. Entity refers to machine accounts, such as client workstations or virtualized server instances, and to embedded hardware such as IoT devices. The complexity of determining baselines and reducing false positives means that UEBA solutions are heavily dependent on advanced computing techniques, such as artificial intelligence (AI) and machine learning. Examples include:
Microsoft's Advanced Threat Analytics
○ (docs.microsoft.com/en-us/advanced-threat-analytics/what-is-ata)
Splunk UEBA
○ (splunk.com/en_us/software/user-behavior-analytics.html)

Sandboxing for Malware Analysis
Sandboxing is a technique that isolates untrusted data in a closed virtual environment to conduct tests and analyze the data for threats and vulnerabilities. Sandbox environments intentionally limit interfacing with host environments to maintain the hosts' integrity. Sandboxes are used for a variety of purposes, including testing application code during development and analyzing potential malware. The analysis of files sent to a sandbox environment can include determining whether the file is malicious, how it might have affected certain systems if run outside of the sandbox, and what dependencies it might have with external files and hosts. Sandboxing offers more than traditional anti-malware solutions because you can apply a variety of different environments to the sandbox instead of just relying on how the malware might exist in your current configuration/environment.
To effectively analyze malware, sandboxes should provide the following features:
○ Monitor any system changes without direct user interaction
○ Execute known malware files and monitor for changes to processes and services
○ Monitor all system calls and API calls made by programs
○ Monitor network sockets for attempted connections, such as using DNS for command and control
○ Monitor program instructions between system and API calls
○ Take periodic snapshots of the environment
○ Record file creation/deletion during the malware's execution
○ Dump the virtual machine's memory at key points during execution
The sandbox host that the malware is installed to should be physically or logically isolated from the main network. The host must not be used for any purpose other than malware analysis. This is often achieved using a virtual machine (VM). Additional host patch management and network access control precautions must be taken, as there are vulnerabilities in hypervisors that can be exploited by malware. A complex honeypot type of lab might involve the use of multiple machines and internet access, to study closely how the malware makes use of C&C and the domains or IP addresses it is communicating with. Care must be taken to ensure that the lab systems are not used to attack other internet hosts.

Reverse Engineering and Analysis Methods and Tools
Reverse engineering is the process of analyzing a system's or application's structure to reveal more about how it functions. In the case of malware, being able to examine the code that implements its functionality can provide you with information about how the malware propagates and what its primary directives are. Similar to a person's handwriting, you may be able to attribute the source of the malware according to how the code is written or the recognizable patterns that its execution follows. Some malware is easier to deconstruct than others; for example, the nature of the class files in the Java programming language allows them to be easily decompiled into source code. Apps written in Java can be reverse engineered with freely available, easy-to-use tools. Additionally, there are two methods of malicious code analysis:
○ Static analysis: With static analysis you will be reading the code of the malicious application in an attempt to understand what its goals are. You may also be able to find IP addresses of C2 servers that the malicious program will communicate with.
○ Dynamic analysis: In this case the analyst will allow the malicious application to run within a sandbox environment and will use analysis tools to discover information about its goals and directives.

DomainKeys Identified Mail (DKIM)
DomainKeys Identified Mail (DKIM) provides a cryptographic authentication mechanism. This can replace or supplement SPF (Sender Policy Framework). To configure DKIM, the organization uploads a public key as a TXT record in the DNS server. When outgoing email is processed, the domain MTA calculates a hash value on selected message headers and signs the hash using its private key. The hash value is added to the message as a DKIM signature, along with the sequence of headers used as inputs for the hash, the hash algorithm, and the selector record, to allow the receiving server to locate the correct DKIM DNS record. (A simplified sketch of this hash-and-sign step appears below, after the DMARC overview.)

DMARC
The Domain-Based Message Authentication, Reporting, and Conformance (DMARC) framework ensures that SPF and DKIM are being utilized effectively. A DMARC policy is published as a DNS record.
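A heavily simplified sketch of the DKIM hash-and-sign step described above, using the third-party Python cryptography package. It signs a couple of selected headers with an RSA key, which is the general idea behind a DKIM signature; real DKIM (RFC 6376) also defines header canonicalization, the signature header fields, and the selector-based DNS lookup, none of which are reproduced here.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Hypothetical signing-domain key pair; the public half would be published
# in DNS as a TXT record so receiving servers can verify signatures.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Selected headers that the sending MTA chooses to cover with the signature.
signed_headers = b"from:alice@example.com\r\nsubject:Quarterly report\r\n"

# Hash the selected headers and sign the digest with the domain's private key.
signature = private_key.sign(signed_headers, padding.PKCS1v15(), hashes.SHA256())

# A receiving server would fetch the public key from DNS and verify;
# verify() raises InvalidSignature if the covered headers were altered in transit.
public_key.verify(signature, signed_headers, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")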
The policy specifies an alignment mechanism to verify that the domain identified in the message's From header field matches the domain in the envelope from field (return-path) in an SPF check and/or the domain component in a DKIM signature. DMARC can use either SPF or DKIM or both. DMARC specifies a more robust policy mechanism for senders to specify how DMARC authentication failures should be treated (flag, quarantine, or reject), plus mechanisms for recipients to report DMARC authentication failures to the sender.
○ Recipients can submit an aggregate report of failure statistics and a forensic report of specific message failures.

CySA+ Lesson 4 A&B
SIEM Deployment
It usually takes a combination of multiple events to reveal a security incident.
○ Information that may mean nothing independently could paint a picture of a security incident when put together.
○ For example: logs show that an employee working remotely has logged in from home at 11:00 AM, but sign-in logs show that they have used a key card to access a part of the building locally at 11:05 AM on the same day.
The SIEM tools used to analyze the logs and alerts generated by network applications will usually be configured with the following considerations in order to be effective:
○ Logging relevant events while ignoring irrelevant data
○ Documenting the scope of events
○ Having clearly defined rules for what is considered a threat and what is not
○ Planning a course of action to take in the event that there is an alert of a threat event
○ Having a ticketing system in order to track all flagged events
○ Supplementing automated detection with proactive threat hunting

Splunk
Splunk is one of many SIEM tools. It imports generated data and analyzes it in real time using searches written in Splunk's Search Processing Language (SPL). Results of searches can be presented using visualization tools in custom dashboards and reports, or configured as triggers for alerts and notifications.

Security Data Collection and Use Cases
When configuring a SIEM it is important to make sure that you will always have information that covers all angles of potential threat events:
○ When the event started
○ Who was involved in the event
○ What occurred in the event
○ Where the event occurred (and what the area of effect was)
○ Where the event originated (inside or outside? local IP or foreign IP?)

Security Data Normalization
Data normalization involves the process of transforming various data from its original formats into a standardized format.
○ Since data is coming from so many different sources (scanned by humans, or software), all of these different formats will need to be reformatted or at least restructured to facilitate the scanning and analysis of the data.
SIEMs will generally collect data in these forms:
○ Agent based: An agent is installed on all hosts on the network. As events occur, all of the information is shared with a host and sent to the SIEM server for analysis and storage.
○ Listener/collector: This implementation will not install a client on hosts; rather, hosts will be configured to send updates to the SIEM using protocols like syslog or SNMP.
○ Sensor: The SIEM can collect packet transfers and traffic flow from data sniffers.

Syslog
The syslog protocol, which uses port 514, is a way for network devices to use a standard message format to communicate with a logging server. Computer systems will use it to send event data logs to a central location for storage. (See the example below.)
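A minimal sketch of sending an event to a central syslog collector over UDP port 514, using Python's standard logging module; the collector address and logger name are hypothetical placeholders.

import logging
import logging.handlers

# Forward log records to a (hypothetical) central syslog collector on UDP 514.
syslog = logging.handlers.SysLogHandler(address=("192.0.2.10", 514))
syslog.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))

logger = logging.getLogger("webapp01")
logger.setLevel(logging.INFO)
logger.addHandler(syslog)

# This event would arrive at the collector as a standard syslog message,
# ready to be parsed, stored, and correlated by the SIEM.
logger.warning("failed login for user admin from 10.1.0.102")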
Logs can then be accessed by analysis and reporting software to perform audits, monitoring, troubleshooting, and other necessary tasks.

Analysis and Detection Methods
Various rules can be applied to data inputs and outputs to generate alerts from SIEM tools. The simplest form of correlation to configure policies around is signature-based detection rules, also called "conditional analysis."
○ This format uses the "IF x AND (y OR z)" structure to generate alerts.
○ Most rule-based SIEM correlation systems can generate a large number of false positives, and are useless against zero-day attacks.

Heuristic Analysis and Machine Learning
The standard rule-based format can be improved by using heuristic analysis or machine learning. This allows machines to not only make decisions based on rules or data inputs, but also to independently infer information that may not be explicitly given to them. The software can use techniques to determine whether a set of data points is similar enough to a rule for an alert to be generated, even though the event doesn't match the rule exactly. Essentially, the software learns for itself what characteristics to look for in events rather than just paying attention to events that match signatures or conditions directly.

Behavioral Analysis
This can also be called "statistical" or "profile-based" detection. An engine is trained to recognize baseline traffic (traffic under normal conditions). With the knowledge of what normal traffic looks like, it can then determine whenever there are deviations from the norm and will send out alerts. Anomaly analysis is useful because you don't need to rely on known malicious signatures to identify something unwanted in your organization, as relying only on signatures can lead to false negatives.

Anomaly Analysis
Anomaly-based analysis will define an expected outcome to an event and then identify any events that do not follow these patterns. Once rules have been set, if the network traffic or host-based events do not conform to the rules, then the system will note the event as anomalous.

Trend Analysis
Trend analysis will detect patterns within a dataset over a time period. Those patterns will be used to make predictions on future events. Not only can this help you potentially predict attacks that may be imminent, but it can also help you to identify the indicators that lead up to an attack if an attack actually occurs. Trend analysis will look at the frequency and volume of events, as well as any statistical deviation. (A small baseline-deviation sketch follows below.)
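As a minimal sketch of the baseline-and-deviation idea behind behavioral, anomaly, and trend analysis, the example below learns a mean and standard deviation from historical event volumes and flags any new value that falls too far outside them; the numbers and threshold are hypothetical.

import statistics

# Hypothetical hourly outbound-transfer volumes (MB) observed during normal operation.
baseline_volumes = [120, 135, 110, 140, 125, 130, 118, 127]

mean = statistics.mean(baseline_volumes)
stdev = statistics.stdev(baseline_volumes)

def is_anomalous(observed_mb: float, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the baseline mean."""
    return abs(observed_mb - mean) > threshold * stdev

for value in (128, 132, 950):   # a 950 MB transfer after hours would stand out
    status = "ANOMALY" if is_anomalous(value) else "normal"
    print(f"{value} MB -> {status}")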