Intelligence Life Cycle-Analysis-Final PDF


Summary

This document covers intelligence analysis for cyber threat intelligence, including structured analytic techniques such as analysis of competing hypotheses, and the analytic frameworks used in cyber threat intelligence tradecraft.

Full Transcript


Hello, and welcome to the analysis module of the Cyber Threat Intelligence Lifecycle course. I'm Sharon Fladograf, and I'll be your instructor. In this module, we will learn about several structured analytic techniques used by cyber threat intelligence analysts, including analysis of competing hypotheses, signposts of change analysis, the cross-impact matrix, and a bonus technique: threat casting. We will also cover the main models and analytic frameworks that make up cyber threat intelligence tradecraft, including the Lockheed Martin kill chain, the Diamond Model, and MITRE ATT&CK. We will then touch on what those buzzwords, observables and indicators, mean, and dip our toes into the attribution process, otherwise known as solving the who in whodunit.

Just to orient you, here's where we are in the intelligence cycle. As mentioned in the intro module, we're not covering processing and ingestion because that is a very technical, data-heavy topic. Also, we'll be covering analysis and production separately to ensure each gets the attention it deserves. Let's assume we've written our intelligence requirements, defined our collection plan, and gone out and collected a bunch of information and data that has been standardized and ingested into a tool that allows us to easily display it all in a readable format. Now it's time to put on our thinking caps and analyze.

Start with context. Contextual cyber threat intelligence seeks to identify specific threats which are more likely to target a given organization over another. It also seeks to define an adversary's exploit methodology and motivation. While it can never provide absolute certainty with regard to future threats, it can be of tremendous benefit in reducing uncertainty to support well-informed, cybersecurity-focused business decisions.

Intelligence analysis requires consuming the information that has been collected and exploited under the direction of the decision makers and aggregating it into a cohesive intelligence product. It is the analyst's job to conduct a thorough assessment of the validity and reliability of the sources in order to properly weigh the aggregated information and derive a conclusion that is sound. There are a number of things the analyst must consider when making these types of decisions, such as validating sources and triaging information, as we talked about in the last module. The analyst must also consider the context surrounding the information and how that context can affect how the information is viewed in regards to a particular threat. To do this, there are a number of different analytic techniques that can be employed, which we'll talk about next.

Intelligence analysis is typically done within the sphere of expert judgment, meaning it's the composite of expertise based on the study of empirical evidence, case studies, and personal experience, all compiled and connected through critical thinking. Structured analytic techniques are an approach to analysis that came about after the U.S. intelligence community underwent major analytic reforms in response to the 9/11 attacks, to combat the pitfalls of faulty or unreliable intelligence products. Their adoption has spread throughout the U.S. intelligence community and has made its way into the private sector as more and more companies stand up their own threat intelligence teams. Structured analysis is an approach that allows analysis to be externally worked through, allowing for critiquing, review, and discussion.
In doing so, cognitive pitfalls can be mitigated through an understanding of the process. This process captures the analyst's reasoning and provides a roadmap for additional analysts to follow, as well as helping to avoid cognitive bias. Cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own subjective reality from their perception of the input. An individual's construction of reality, not the objective input, may dictate their behavior in the world. Cognitive bias is not always negative, but it can cloud our judgment and affect how clearly we perceive situations, people, or potential risks. Analytic methods, such as the ones listed on this slide, that allow for, and in some cases encourage, collaboration combat a single point of failure and instead appeal to diverse opinions, expertise, and creativity. In short, to be a good analyst, you have to be willing to admit you are wrong, be able to work well with others, and enjoy group projects.

Our first technique, analysis of competing hypotheses, is exactly what it sounds like. Analysts identify several mutually exclusive explanations of the circumstances of an event. Analysts then take steps to refute each hypothesis based on the evidence at hand as best they can. Working to refute, rather than confirm, a hypothesis helps avoid confirmation bias, where one might give undue weight to evidence that supports a preexisting belief. This process is particularly useful in denial and deception instances, which are common in cyber, as well as in identifying potential indicators that would be expected for each estimate.

The ACH process involves several key steps. One, identifying hypotheses: begin by listing all possible explanations for the situation or event in question. Two, evaluating evidence: assess the evidence and determine how consistent it is with each hypothesis. Three, refining hypotheses: eliminate hypotheses that are clearly inconsistent with the evidence. Four, matrix creation: create a matrix where hypotheses are compared against the evidence. Five, reaching a conclusion: based on the matrix, identify which hypothesis is most likely to be true, acknowledging any remaining uncertainties.

Let's consider a hypothetical case study. Suppose an intelligence analyst is tasked with assessing the likelihood of a foreign government undergoing a significant political shift. The ACH process might start with hypotheses ranging from no change to peaceful transition to violent overthrow. The analyst would then evaluate evidence such as economic conditions, public protests, and government stability. In this slide, you can see that the analysts at Proofpoint are examining a new phishing campaign and trying to determine the threat actor motivations behind it. They've put forth several hypotheses in regards to what the threat actors might be trying to gain from the campaign, along with what this new campaign might mean for the overall strategic goals of the group.
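To make the matrix creation and conclusion steps concrete, here is a minimal sketch in Python of how an analyst might score an ACH matrix. The hypotheses, evidence items, and scores are invented placeholders, not material from the course; the point is that hypotheses are ranked by how much evidence is inconsistent with them, rather than by how much appears to support them.

```python
# Minimal ACH matrix sketch (hypotheses, evidence, and scores are illustrative).
# Scores: +1 = evidence is consistent with the hypothesis,
#          0 = neutral / not applicable,
#         -1 = evidence is inconsistent with the hypothesis.
evidence = ["Mass public protests", "Military remains loyal", "Economy is stable"]
hypotheses = {
    "No change":           [-1, +1, +1],
    "Peaceful transition": [+1,  0, -1],
    "Violent overthrow":   [+1, -1, -1],
}

def rank_hypotheses(matrix):
    """Rank hypotheses by inconsistency count (fewest inconsistencies first),
    reflecting ACH's emphasis on refuting rather than confirming."""
    scored = []
    for name, scores in matrix.items():
        inconsistencies = sum(1 for s in scores if s < 0)
        scored.append((inconsistencies, name))
    return sorted(scored)

for inconsistencies, name in rank_hypotheses(hypotheses):
    print(f"{name}: {inconsistencies} piece(s) of inconsistent evidence")
```

In practice the scoring of each cell is debated among analysts, and the surviving hypothesis is still reported with its remaining uncertainties acknowledged.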
Our next technique is signposts of change analysis. Analysts can create lists of indicators, based on previous threat profiles and the like, that they would expect to see in a developing situation or event, such as a DDoS attack. Analysts can roundtable this discussion to determine what types of information they expect to see to diagnose a developing incident. Even incomplete lists of indicators can be added to an intelligence product to inform business officers on what they might watch for should an event unfold later on. Sometimes these indicators are called flags or gates, and when they show up, that is considered a checkpoint to alert the network or business monitors that an incident is possibly developing. For instance, in geopolitical analysis, signposts might include changes in leadership, economic indicators, or shifts in public sentiment. The ability to recognize and interpret these signposts is crucial for anticipating future developments.

In practical terms, signposts are used to continuously update and refine hypotheses. As new information becomes available, analysts compare it against the established signposts to determine if a significant change is occurring. This proactive approach allows analysts to adjust their assessments and stay ahead of potential developments. In our previous example, where an intelligence analyst is tasked with assessing the likelihood of a foreign government undergoing a significant political shift, signposts of change analysis might include monitoring for an increase in military presence, changes in key political figures, or shifts in media or online rhetoric.
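As a rough illustration of how signposts can be operationalized, here is a minimal sketch in Python of a watchlist check. The signposts and observations are made up for the example; in practice the list would come out of the roundtable discussion described above.

```python
# Minimal signposts-of-change sketch (signposts and observations are made up).
signposts = {
    "increased_military_presence": "Troop movements near the capital",
    "leadership_change":           "Key political figure replaced or removed",
    "rhetoric_shift":              "Sharp change in state media or online rhetoric",
}

# Signpost IDs an analyst has tagged in new reporting.
observed = {"rhetoric_shift", "leadership_change"}

def check_signposts(observed_ids, watchlist):
    """Return the signposts (flags/gates) that have been triggered."""
    return {sid: desc for sid, desc in watchlist.items() if sid in observed_ids}

triggered = check_signposts(observed, signposts)
for sid, desc in triggered.items():
    print(f"FLAG: {sid} -> {desc}")
print(f"{len(triggered)} of {len(signposts)} signposts triggered; reassess hypotheses.")
```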
The cross-impact matrix is a tool used to assess how different events or variables might influence each other. The technique involves mapping out potential events or variables in a matrix format and then analyzing the impact that one event might have on another. This approach helps analysts anticipate the cascading effects of changes in one area on others, providing a more comprehensive view of possible future scenarios.

Let's break down the structure of the cross-impact matrix. First, we have a list of variables or events: begin by identifying the key variables or events that could influence the scenario you're analyzing. These are placed on both the horizontal and vertical axes of the matrix. Second is the matrix setup: each cell of the matrix represents the potential impact of one variable or event on another. Third, impact scoring: for each cell, assign a score or qualitative assessment indicating the degree of impact. This could be positive, negative, or neutral, and you can also consider the likelihood of each impact occurring. Fourth, analysis: the completed matrix allows analysts to see which variables are most influential and how changes in one area might propagate through the system.

To understand how the cross-impact matrix is applied, consider a scenario in geopolitical analysis where we need to assess the potential outcomes of a political crisis. Key variables might include economic sanctions, public protests, military responses, and international diplomacy. By analyzing how these factors influence each other, the cross-impact matrix helps to identify potential tipping points, high-impact variables, and likely outcomes.

The cross-impact matrix offers several key benefits. First, revealing hidden interdependencies: this technique helps to uncover complex interrelationships between variables that might not be immediately apparent. Second, enhancing strategic planning: by understanding how different factors influence each other, decision-makers can better anticipate the consequences of their actions and plan accordingly. Third, mitigating risks: the matrix allows analysts to identify potential cascading effects of risks, helping to develop strategies that mitigate negative outcomes.

Creating a cross-impact matrix involves the following steps. First, identify key variables: list all the critical factors that could affect the scenario. Second, construct the matrix: set up the matrix with the variables listed on both axes. Third, assess impact: evaluate the impact of each variable on the others, assigning a qualitative or quantitative score to each cell. Fourth, analyze the matrix: review the matrix to identify which variables are most influential and how changes in one might impact others. Fifth, draw your conclusions: use the insights gained from the matrix to inform strategic decisions, identify potential risks, and anticipate outcomes.

Let's walk through a brief case study. Imagine we are analyzing the potential impact of climate change policies on global trade. Key variables might include environmental regulations, economic growth, energy prices, and technological innovation. By using a cross-impact matrix, we can assess how stricter environmental regulations might influence energy prices, which in turn could affect technological innovation and global trade patterns.

While the cross-impact matrix is a powerful tool, it's important to be aware of its limitations. First, subjectivity: the assessment of impacts is often subjective and depends on the expertise of the analyst. Second, complexity: for scenarios with a large number of variables, the matrix can become complex and difficult to manage. Finally, static analysis: the matrix provides a snapshot in time and may not fully capture dynamic changes over time.
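Before moving on to the bonus technique, here is a minimal sketch in Python of the cross-impact matrix just described. The variables and impact scores are invented for illustration; each cell records the estimated impact of the row variable on the column variable, and row totals give a rough measure of which variables are most influential.

```python
# Minimal cross-impact matrix sketch (variables and scores are illustrative).
# Each cell holds the estimated impact of the row variable on the column variable:
# +2 strong positive, +1 positive, 0 neutral, -1 negative, -2 strong negative.
variables = ["Environmental regulations", "Energy prices", "Tech innovation", "Global trade"]
impact = [
    [ 0, +2, +1, -1],   # regulations -> energy prices, innovation, trade
    [ 0,  0, +1, -2],   # energy prices -> innovation, trade
    [ 0, -1,  0, +1],   # innovation -> energy prices, trade
    [ 0,  0, +1,  0],   # trade -> innovation
]

# A variable's total outgoing impact is a rough measure of its influence on the system.
for name, row in zip(variables, impact):
    influence = sum(abs(cell) for cell in row)
    print(f"{name}: total influence score {influence}")
```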
For a deeper dive on these and other structured analytic techniques, please pause the video and do the reading assignment, Structured Analytic Techniques for Improving Intelligence Analysis.

Welcome to our bonus section on structured analytic techniques. Threat casting is a conceptual framework used to help multidisciplinary groups envision future scenarios. It is also a process that enables systematic planning against threats 10 years into the future. Utilizing the threat casting process, groups explore possible future threats and how to transform the future they desire into reality while avoiding undesirable futures. Threat casting is a continuous, multi-step process with inputs from social science, technical research, cultural history, economics, trends, expert interviews, and even science fiction storytelling. These inputs inform the exploration of potential visions of the future. Once inputs are explored for impact and application, participants create a science fiction story, known as science fiction prototyping, set 10 years in the future to add context around human activity. A science fiction prototype consists of a future story about a person in a place doing a thing.

The threat casting process results in the creation of many potential future scenarios. Some futures are desirable while others are not. Identifying both types of futures, desirable and undesirable, helps participants recognize which future to aim towards and which to avoid. Utilizing the scenarios, participants plot the actions necessary in the present and at various intervals while working towards the 10-year future scenario. These actions help participants understand how to empower or disrupt the target future scenario. Flags, or warning events, are also determined in order to map societal indicators onto the recommended path toward the targeted future. When identified flags appear in society, threat casting participants map them back to the original forecast to see whether or not they are on track towards the target future scenario. As you can see, threat casting incorporates the signposts of change analytic technique, but in this case uses it in a predictive capacity.

Threat casting was developed by futurist Brian David Johnson at Arizona State University, who has led several threat casting workshops at MasterCard. Early adopters of threat casting include the United States Air Force Academy, the Government of California, and the Army Cyber Institute at West Point. For an in-depth view of threat casting at MasterCard, please pause this video and complete the linked reading assignments.

Welcome back to the course. Our next section is common models and analytic frameworks. To understand the nature of cyber threats, we must take a closer look at the threat actors who perpetrate them: who they are, how they operate, and why they do what they do. Like the intelligence cycle, there are several threat models that illustrate the methodology employed by cyber adversaries, and despite their subtle differences, the similarities are consistent enough to demonstrate the process. Although the motives of cyber adversaries vary greatly, the general process for gaining access is typically quite consistent from one method and motive to another. That is because whether it is a criminal in the real world or a cyber criminal in the digital world, the act of casing a target to better understand its vulnerabilities in order to gain access is a consistent need for the attacker, even if the process varies slightly. These models and frameworks allow us to translate raw data into a group's behavior, otherwise known as tactics, techniques, and procedures, or TTPs.

The Kill Chain framework, created by U.S. defense contractor Lockheed Martin, represents the first in the CTI framework triumvirate and is one of the original foundations of cyber threat intelligence analysis. The model identifies the phases adversaries must complete in order to achieve their objective. I'll go through the phases now. Reconnaissance: this is the attacker getting the lay of the land and analyzing their target. This can include scanning to enumerate the target's network, harvesting email addresses of employees from open source and social media, and gathering any open source data that will tell the attacker what the company is, who works there, what they do, and how the network is set up. Basically, it's internet stalking the target. Weaponization: this is where the attacker crafts their weapon. They will use malicious code paired with a backdoor to create a payload that, once delivered, will gain them entry into the target's network. Delivery: this is when the attacker launches their weapon. They deliver it to the target's network via an email, a webpage, a USB device, or a download of some sort. Exploitation: once delivered, the malicious code is triggered upon an action, a click of something or the running of a program. This in turn exploits a vulnerability and allows the attacker to gain access to the network. Installation: once the network has been exploited, the attacker is able to gain access and malware is installed on the asset. Command and control: once the malware is installed, the attacker sets up a command channel so they can execute remote requests to manipulate the victim asset. Actions on objectives: with the channel set up, the attacker can now execute commands to accomplish their mission objectives.
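As a small illustration of how the kill chain is used defensively, here is a minimal sketch in Python that maps example observables onto the phases above. The observables and their phase assignments are simplified assumptions for the example, not an official Lockheed Martin artifact.

```python
# Minimal kill chain mapping sketch (observables and phase assignments are examples).
KILL_CHAIN = [
    "Reconnaissance", "Weaponization", "Delivery", "Exploitation",
    "Installation", "Command and Control", "Actions on Objectives",
]

# Example observables tagged with the phase an analyst judged them to belong to.
observations = [
    ("Port scan of external web servers", "Reconnaissance"),
    ("Phishing email with malicious attachment", "Delivery"),
    ("Macro executed, dropper written to disk", "Installation"),
    ("Beaconing to a suspicious domain", "Command and Control"),
]

# The latest observed phase shows how far the intrusion has progressed and
# which later phases defenders still have a chance to disrupt.
latest = max(KILL_CHAIN.index(phase) for _, phase in observations)
print(f"Latest observed phase: {KILL_CHAIN[latest]}")
print("Remaining phases to block:", ", ".join(KILL_CHAIN[latest + 1:]))
```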
Let's take a brief look at a case study demonstrating the effectiveness of the cyber kill chain framework in a real-world scenario. The case involves a sophisticated cyber attack against a large organization. The attackers used spear phishing emails to gain initial access, followed by privilege escalation and lateral movement within the network. Had the attack reached its conclusion, a serious compromise would have occurred. Lockheed Martin's security team detected the attack early in the cyber kill chain, specifically during the delivery phase. Using the intelligence-driven approach, they were able to analyze the attack, identify the techniques used, and trace the attack back to its origin. The team employed various defense mechanisms to disrupt the attack at multiple stages, preventing the attackers from achieving their goal. The case study highlights the importance of integrating threat intelligence with defense operations to anticipate and counter advanced persistent threats, or APTs, effectively. By leveraging the cyber kill chain and an intelligence-driven defense strategy, the organization successfully mitigated the attack, demonstrating the framework's practical application and value in defending against complex cyber threats. For your reading assignment, please pause the video and check out the different definitions of indicators, since this is a hot topic of debate in our community, as well as the courses of action, which are often overlooked. The intrusion attempt examples in this paper are also helpful to new analysts who have never worked an intrusion themselves.

Welcome back to the course. Our next framework is called the Diamond Model. The Diamond Model is a simple but powerful model every analyst should understand, and it's the second of the big three CTI frameworks. It serves as an excellent template for analysts hunting and mapping adversary behavior. The main objectives of the Diamond Model are to identify specific attackers, understand the tactics, techniques, and procedures (TTPs) they use, and more effectively respond to cyber incidents as they occur. Just as there are four points in a diamond, the Diamond Model has four key components: adversary, infrastructure, capability, and victim. These components also have various links or relationships, such as adversary-victim, adversary-infrastructure, and adversary-capability. The Diamond Model is particularly well suited to visualizing and understanding complex attack scenarios. By modeling the relationships between adversaries, victims, infrastructure, and capabilities, the Diamond Model helps cyber analysts see how the different elements of a cyber attack interact with and influence each other. The Diamond Model condenses large amounts of data into a simple diagram, making it easier to explore different links and patterns.

Each element of the Diamond Model possesses different attributes that include valuable additional information. For example, key attributes of the adversary element include the adversary's identity, name, or pseudonym; the adversary's motivations and objectives, for example, financial gain, corporate espionage, or disruption; the adversary's technical capabilities, skills, and knowledge; the adversary's TTPs (tactics, techniques, and procedures); and the adversary's attribution indicators, meaning pieces of evidence that link the adversary to a particular group, such as code similarities or similar tactics.
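Before we list the attributes of the infrastructure element, here is a minimal sketch in Python of how a single intrusion event might be recorded against the diamond's four vertices and their relationships. The field names and values are illustrative assumptions, not a prescribed schema.

```python
# Minimal Diamond Model sketch: one intrusion event recorded against the four
# vertices (adversary, capability, infrastructure, victim). Values are invented.
from dataclasses import dataclass, field

@dataclass
class DiamondEvent:
    adversary: dict       # identity/pseudonym, motivation, TTPs, attribution indicators
    capability: dict      # malware families, exploits, tooling
    infrastructure: dict  # C2 domains/IPs, protocols, registration details
    victim: dict          # targeted organization, sector, assets
    relationships: list = field(default_factory=list)  # edges between vertices

event = DiamondEvent(
    adversary={"pseudonym": "ExampleBear", "motivation": "espionage"},
    capability={"malware": "ExampleRAT", "delivery": "spear phishing"},
    infrastructure={"c2_domain": "update-check.example.com", "protocol": "HTTPS"},
    victim={"sector": "financial services", "asset": "employee workstation"},
    relationships=[("adversary", "deploys", "capability"),
                   ("capability", "communicates with", "infrastructure"),
                   ("infrastructure", "connects to", "victim")],
)
print(event.adversary["pseudonym"], "->", event.victim["sector"])
```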
Key attributes of the infrastructure element include the geographic locations, IP addresses, and domains of the servers in the adversary's command and control infrastructure; the communication protocols used, for example, HTTP or DNS; domain registration details, meaning the registration date and the name of the registering party for any domains owned or used by the adversary; the websites or servers hosting malware or phishing scams; and abnormal traffic patterns indicating communication with the adversary's command and control systems. Please pause the course video to complete your next reading assignment, The Diamond Model of Intrusion Analysis.

Welcome back. MITRE ATT&CK is the third and final framework that we will be covering in this course. MITRE ATT&CK is a curated knowledge base and model for cyber adversary behavior, reflecting the various phases of an adversary's attack lifecycle and the platforms they are known to target. ATT&CK focuses on how external adversaries compromise and operate within computer information networks. The ATT&CK framework is a Rosetta Stone of sorts, giving different teams within a corporate security environment a common language to talk about adversary activity. For instance, threat intelligence analysts often talk about an actor's motivation and behavior, but network defenders want to know what specifically the intruder was doing in their network and what applications they were using to do it. The ATT&CK matrix helps bridge this communications gap between teams. It allows a threat intelligence analyst to convert behaviors into technical steps along the attack cycle, and network defenders to use this map to gauge how well their existing defenses can block the activity.

MITRE's threat-based approach to network compromise is guided by five principles. First, include post-compromise detection: over time, previously effective perimeter and preventative defenses may fail to keep persistent threats out of the network. Post-compromise detection capabilities are necessary for when a threat bypasses established defenses or uses new means to enter a network. Second, focus on behavior: signatures and indicators are useful with prior knowledge of adversary infrastructure and artifacts, but defensive tools that rely on known signatures may become unreliable when signatures become stale in relation to a changing threat. Third, use a threat-based model: an accurate and well-scoped threat model is necessary to ensure that detection activities are effective against realistic and relevant adversary behaviors. Fourth, iterate by design: the adversarial tool and technique landscape is constantly evolving, so a successful approach to security requires constant, iterative evolution and refinement of security models, techniques, and tools to account for changing adversary behavior and to understand how networks are compromised. Fifth, develop and test in a realistic environment: analytic development and refinement should be performed in a production network environment. Behavior generated by real network users should be present to account for the expected level of sensor noise generated by standard network use. Whenever possible, detection capabilities should be tested by emulating adversary behavior within that environment. Please pause the course video to watch two helpful presentations on how cyber threat intelligence analysts can use the ATT&CK framework in their everyday work.
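As a simple illustration of the Rosetta Stone idea, here is a minimal sketch in Python that translates analyst-described behaviors into ATT&CK technique IDs that defenders can check against their detection coverage. The technique IDs shown are real ATT&CK entries, but the behavior-to-technique mapping is a tiny, hand-picked subset used only for illustration; analysts normally work from the full knowledge base.

```python
# Minimal ATT&CK mapping sketch. The technique IDs below are real ATT&CK entries,
# but the behavior descriptions and this mapping are a small illustrative subset.
BEHAVIOR_TO_TECHNIQUE = {
    "spear phishing email with attachment": ("T1566", "Phishing"),
    "powershell used to run a payload":      ("T1059", "Command and Scripting Interpreter"),
    "lateral movement over remote services": ("T1021", "Remote Services"),
    "beaconing over HTTPS to C2":            ("T1071", "Application Layer Protocol"),
}

def map_report_to_attack(observed_behaviors):
    """Translate analyst-described behaviors into ATT&CK technique IDs that
    defenders can compare against their existing detections."""
    return [BEHAVIOR_TO_TECHNIQUE[b] for b in observed_behaviors if b in BEHAVIOR_TO_TECHNIQUE]

report = ["spear phishing email with attachment", "beaconing over HTTPS to C2"]
for technique_id, name in map_report_to_attack(report):
    print(f"{technique_id}  {name}")
```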
Welcome back to the course. No course on cyber threat intelligence analysis would be complete without a slide on the pyramid of pain, so here you are. Indicators of compromise, or IOCs, are an important part of the search for the truth, that is, the analytic process. However, IOCs on their own are not enough. They are, in effect, just raw data, and data is not intelligence. The analytic process uses IOCs as a starting point, and then, using the previously mentioned frameworks and techniques, we can form a hypothesis, gather evidence, and answer our intelligence questions.

Cyber attacks, like other forms of conflict, exist between one human being or group and another. By recognizing that fact, we know that long before the technical parameters of an exploit attempt are even planned, the motive has been identified by the adversary. Therefore, it is critical to also examine the adversary's motives within the specific business context. Once motives are identified, analysts can begin to more closely examine specific and likely adversaries in order to characterize the threat and to identify their preferred threat vectors and TTPs. When conducting the assessment of the adversary, it's important to closely consider the human aspect, as no source of information is out of bounds. Do not only search for technical indicators, but pay close attention to human-based indicators such as preferred order of operations, grammatical errors, current geopolitical unrest, cultural and social norms, and beyond, such as how that adversary views your industry, your company, your brand, and recent business decisions that have been reported in the news. This is why it's crucial to include various business and network managers as stakeholders in the requirements and planning process.

Cyber attribution is the process of tracking, identifying, and laying blame on the perpetrator of a cyber attack or other hacking exploit. Attribution can help to gather intelligence about the motivations, capabilities, and tactics of cyber attackers, which can be used to better protect against future attacks. However, attribution in cyber intelligence is difficult and often inconclusive because attackers can use various methods to hide their identities, such as proxy servers, VPNs, and other means of masking their IP addresses and real identities.

You've probably noticed that each intelligence vendor has its own naming conventions for threat actor groups. This is because data sets matter. No one can see everything, and each vendor is limited by its finite data set. While their data sets might overlap with those of other intelligence vendors, and hence reporting on Mandiant's APT29 is going to be similar to that on CrowdStrike's Cozy Bear, they can only report on what they are seeing. While we, the consumers, would love for the major vendors to get together, compare notes, and publish a giant report, that's not really a good business move for them. What this means for the cyber threat analysts using these vendors is that you're rarely going to have a smoking gun. The most you can say is that you saw something on your network that overlaps with a data set that a vendor has.
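One practical way teams cope with this is to keep a simple alias map that ties each vendor's group names back to an internal activity cluster. Here is a minimal sketch in Python; the APT29/Cozy Bear and APT28/Fancy Bear pairings are widely reported vendor naming overlaps, while the internal cluster IDs and the unmapped example name are made up.

```python
# Minimal threat-actor alias map sketch. The alias pairings are widely reported
# overlaps between vendor naming schemes; cluster IDs are internal placeholders.
ALIASES = {
    "apt29":      "internal-cluster-001",
    "cozy bear":  "internal-cluster-001",
    "the dukes":  "internal-cluster-001",
    "apt28":      "internal-cluster-002",
    "fancy bear": "internal-cluster-002",
}

def normalize_actor(vendor_name):
    """Map a vendor's group name to an internal cluster ID, or flag it as unmapped."""
    return ALIASES.get(vendor_name.strip().lower(), "unmapped")

# "Midnight Lizard" is a made-up name to show the unmapped case.
for name in ["APT29", "Cozy Bear", "Midnight Lizard"]:
    print(f"{name} -> {normalize_actor(name)}")
```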
So, should we leave attribution efforts to law enforcement? Maybe. From a private sector perspective, having your intel analysts spend cycles tracking down the bad guys and obsessing over the differences between APT29 and Cozy Bear probably isn't the best use of their time. You're not going to be making any arrests. However, smaller-scale attribution can be hugely helpful in focusing your network defenders' resources. If you know that a certain incident tracks back to an identified cluster of activity or threat actor group, then you can focus your analytic efforts on learning more about that group's motivations and TTPs to better defend against a future campaign against your organization. For your last module assignment, please pause your course video to read the following article and watch some videos for a deeper dive into the art and science of attribution.

This concludes the analysis module. We covered three main structured analytic techniques: analysis of competing hypotheses, signposts of change, and the cross-impact matrix. We also covered a bonus analytic technique, threat casting. Then we examined the three main analytic frameworks: the Lockheed Martin kill chain, the Diamond Model, and MITRE ATT&CK. From there, we moved on to defining the different types of indicators and observables. We finished up with a brief look at cyber attribution. You should now have an understanding of how to conduct analysis with the purpose of refining raw information, observables, and indicators into well-thought-out intelligence hypotheses and supporting evidence.

In our next module, we will explore the production of threat intelligence. This includes developing appropriate products and service lines for target stakeholders and intelligence consumers. Usually, this is in the form of finished intelligence reports or intelligence briefings. If you have any questions, please feel free to reach out to me at my email, sharonflattergraph@mastercard.com. Thank you so much, and we'll see you in the next module.
