Information Assurance and Security 1 Prelim Lesson 2 PDF


Summary

This document is a lecture for Information Assurance and Security 1 that provides an overview of cybersecurity. It covers the difference between security and safety, the challenges of cybersecurity, and the evolving nature of digital attacks in the modern world.

Full Transcript


Information Assurance and Security 1 Prelim Lesson 2
Prof. John C. Valdoria, MIT College of Industrial Technology

Overview of Cybersecurity

The world has changed, and quickly. Cybersecurity has become a general concern for all: citizens, professionals, politicians, and, more generally, all decision makers. It has also become a serious concern for our societies, which must protect us against cyberattacks with both preventive and reactive measures (which implies a lot of monitoring) while simultaneously preserving our freedom and avoiding general surveillance.

Cyberattacks may be conducted by criminals, but also by states: for industrial espionage, for economic damage to apply pressure, or to inflict real damage to infrastructure as an act of war. States and their interconnected critical infrastructures are vulnerable. Cyberattacks also put companies of all sizes at high risk, and the economic damage caused by successful cyberattacks may be considerable.

However, our protection level is still considered largely insufficient compared to the risks and potential damage. While awareness is improving and protective measures are increasing, they do so at a slow pace. This is partly due to a lack of incentive: cybersecurity is an investment whose benefits are often hard to grasp, as it only pays off when an attack that could otherwise have succeeded fails, and this is difficult to measure. The slow progress is also due to a lack of expertise at all levels.

Computer security, also known as cybersecurity or IT security, is the protection of computer systems from damage to their hardware, software, or information, as well as from disruption or misdirection of the services they provide. Security in general includes both cybersecurity and physical security. However, cybersecurity requires some form of physical security, since physical access to computer systems enables a whole class of attacks. Conversely, physical security may depend on cybersecurity to the extent that it uses computer systems, e.g., to monitor a physical space or to maintain a database of authorized persons. Still, the difference between cyber and physical security should always be clear, and we only address cybersecurity hereafter. Moreover, in many places we will simply use the word security to mean cybersecurity.

Physical Security vs Cybersecurity

Physical security and cybersecurity are quite different in nature. Digital information is immaterial: duplicating and exchanging data and code with anyone anywhere in the world is nowadays a trivial, extremely fast process with almost zero cost. Hence, an attack or malware launched by a single person can spread worldwide, on a large scale, in less than an hour. Digital information is also discrete in nature: a single bit flip may introduce a critical failure and turn a perfectly working system into a malfunctioning one, which is then more vulnerable to compromise (see the small illustration below). This contrasts with the laws of physics, which tend to be continuous at a macroscopic level and usually let one observe a slow deformation of a structure before it reaches its breaking point. Finally, digital information ignores borders, and may even exploit contradictions between the legislations of different countries or their maladaptation to the digital age. All of this makes cybersecurity much harder to achieve than other forms of security.
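As a minimal illustration of the bit-flip remark above (this snippet is a sketch added here, not part of the lecture, and the balance variable is invented), flipping a single bit of a stored integer in Python produces a drastically different value with no gradual degradation:

```python
# Sketch: a single flipped bit turns a valid value into a very different one.
balance = 1000                    # value as stored in memory
corrupted = balance ^ (1 << 20)   # flip bit 20 (e.g., due to corruption or tampering)
print(balance)                    # 1000
print(corrupted)                  # 1049576
```

There is no intermediate, slowly degrading state between the two values; the change is abrupt, which is exactly the contrast with physical deformation drawn above.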
Software safety is concerned with the absence of misbehavior in both normal and exceptional situations, but still in a neutral environment where no one is intentionally trying to attack the system. Software safety is not just a matter of chasing bugs: it also calls for an analysis of the possible sources of misbehavior and of how to handle them in a fail-safe manner. This requires a specification of the software's expected behavior, including a model of the environment, and some justification as to how or why the software respects its specification.

Software security aims for the absence of misbehavior in an adversarial environment, where an attacker intentionally tries to misuse a system by putting it in an erroneous state that is not part of its intended specification. Security can also be approached by modeling the environment, but this is much harder to do exhaustively, because attackers do not comply with predefined rules; they continuously search for previously unknown means of attack. Hence, security also requires us to keep up to date with attackers' progress in all areas (software breaches, algorithms and techniques, hardware capabilities, etc.). A complementary approach consists in describing normal execution paths and monitoring execution, so as to raise an alarm and react appropriately when some trajectory goes outside of normal executions.

The terms security and safety are sometimes misused. Safety refers to accidental threats, due to internal misbehavior or non-intentional misuse of the system, while security refers to intentional threats. Safety deals with fault tolerance, while security deals with resistance to attacks. For example, a car may crash because of a software specification or implementation bug (a safety issue), or because of an attacker taking remote control of the vehicle (a security issue).

These examples highlight several key aspects of security.

Security is an essential cornerstone of a digital world that increasingly pervades every aspect of our daily lives, public and private. Without security, this digital world collapses. Attacks such as WannaCry have deeply impacted unprepared citizens, private companies, and organizations, threatening their activities. All domains of our digital world are concerned, including the embedded devices omnipresent in our "smart" homes and in industrial production controllers (including those for critical infrastructures such as power and water supplies).

The Mirai botnet example highlights that all electronic devices need to be secure. Even if this is well understood for computers, it is far from obvious for other objects, in particular the embedded devices forming the Internet of Things (IoT), whether because they are autonomous, run on small batteries, have limited processing power, or are poorly connected. Moreover, the inability of many IoT devices to apply software updates and patches is a real concern. On the other hand, software updates can themselves be subject to attacks. All of these aspects are still the subject of active research.

Education is essential to security. The WannaCry attack relied on an operating system exploit that had been fixed in a Windows update two months earlier. It therefore impacted only unprepared end users and system administrators who had failed to update their computers in a timely manner, not realizing how important it was. Security is often regarded as complex, which mechanically limits its use.
Usable security, meant to facilitate the use of security by end users, is an important and active research domain that is closely related to security education and awareness.

The security of a system is always limited by that of its weakest component. Even if the core security components (e.g., the cryptographic primitives) are rarely attacked, the same cannot be said of the software implementations of cryptographic protocols and services. In the case of WannaCry, the attack relied on an exploit of Microsoft's SMB protocol implementation (the first weak link), which was sufficient to take full control of the computer, no matter what other operating system protections were in use.

A cryptosystem should be secure even if everything about the system, except the key, is public knowledge (Kerckhoffs's principle). This principle should be applied to other systems as well. An open design and a well-documented system actually ease security reviews by experts. Attackers are often able to reverse engineer systems, and "security by obscurity" only gives a false sense of security. For instance, the attack on smart lights exploited an undocumented functionality.

Large, complex systems cannot be fully validated through human inspection: automatic verification tools are needed to find security protocol flaws as well as implementation flaws. The increasing complexity of each individual component, and the complex composition of components into large interdependent systems, require advanced and automatic security validation tools, which are traditionally a very active research topic.

Security and privacy are closely related. The WannaCry ransomware did not try to exfiltrate users' data, but it could have done so. The attacker had full access to the data stored on target computers (e.g., the patient database of a medical center) and could have threatened to disclose this sensitive information. It is therefore essential that security and privacy be considered together at the design stage so that, for instance, malicious intrusions do not put data at risk. Security by design and, more recently, privacy by design have become key principles.

Attackers' motivations are diverse, and attribution is difficult. Although WannaCry has been classified as ransomware, motivated by the desire to make money, the NotPetya malware that quickly followed it in June 2017 might have been state-sponsored malware that disguised itself as ransomware in order to muddy attribution and delay investigations. These examples highlight the diversity of attackers' motivations and the difficulty, sometimes the impossibility, of attributing an attack.

Detection and mitigation of attacks matter. The previous examples show that security is hard to achieve. Since zero risk does not exist, the early detection and mitigation of attacks is as important as the attempt to reduce the risk of successful attacks. More generally, there will probably always be vulnerabilities in our systems, despite increasingly effective preventive security mechanisms. Vulnerabilities appear at all levels of our information systems: applications, operating systems, firmware, and even hardware, as illustrated recently by the Meltdown and Spectre attacks. Vulnerabilities are sometimes present for a (very) long time in our systems, and we can only hope that they are not exploited before they are discovered. New vulnerabilities are discovered on a daily basis, and new forms of attacks can appear at any time. It is therefore mandatory that we detect well-known attacks, but also new forms of attacks, if we are to increase the security level of our systems.
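As a toy illustration of detecting a well-known attack pattern (this sketch, its log format, addresses, and threshold are invented and not taken from the lecture), a monitor can flag repeated failed logins from the same source, a simple signature of a brute-force attempt. Detecting genuinely new attack forms is much harder and typically relies on anomaly detection over normal behavior.

```python
from collections import Counter

# Hypothetical log entries as (source_ip, event) pairs.
events = [
    ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"),
    ("10.0.0.9", "login_ok"),
    ("10.0.0.5", "login_failed"),
]

THRESHOLD = 3  # arbitrary alert threshold for this sketch

# Count failed logins per source and raise an alert above the threshold.
failures = Counter(ip for ip, event in events if event == "login_failed")
for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {ip}, possible brute-force attempt")
```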
Security comes at a cost. It is easy to understand that security may be expensive, with additional costs to study, implement, configure, manage, and evolve security tools. But security can also have an operational cost, leading to less efficient systems. For example, mitigating the Spectre or Meltdown attacks may require removing some caching techniques or disabling speculative execution. Such mitigation would entail a significant, and possibly unacceptable, processor slowdown. Hence, in some cases, one may have to accept a difficult compromise between security and efficiency.

Cybersecurity consists in ensuring three basic and essential properties of information, services, and IT infrastructures, well known as the CIA triad: Confidentiality, Integrity, and Availability. Thus, securing an information system means preventing an unauthorized entity (user, process, service, machine) from accessing, altering, or rendering inaccessible computer data, computing services, or the computing infrastructure. Other properties, such as authenticity (proof of the origin of information), privacy, or protection against illegal copying, could also be listed; however, these additional properties can be seen as particular cases of the three basic ones.

Confidentiality: assurance that information is disclosed only to authorized persons, entities, or processes.

Integrity: assurance that the system (configuration files, executable files, etc.) or information is modified only by a voluntary and legitimate action, i.e., that the system or information has not been changed accidentally or by an unauthorized party (a small illustrative sketch follows these definitions).

Availability: assurance that a system or information is accessible in a timely manner to those who need to use it.

Authenticity: assurance that a message is from the source it claims to be from.

Privacy: the ability of individuals to control their personal data and decide what to reveal, to whom, and under what conditions. Privacy can thus be generally defined as the right of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.

Anonymity: confidentiality of the identity of the user or entity. Note that preventing re-identification through side information is not easy, and that indistinguishability, which ensures that an attacker cannot tell the difference among a group of entities, is also an important property linked to privacy. Note also that anonymity aims at hiding who performs some action, whereas full privacy may also require hiding which actions are being performed.

Security policy: a set of rules that specify how sensitive and critical resources are protected, i.e., how some or all of the previous properties are guaranteed.

Resilience: initially defined as the ability of a system to return to its original state after an attack, resilience is nowadays seen as the capacity of a system to deliver its services continuously, even while under attack (i.e., the capacity to tolerate attacks).
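To make the integrity property concrete, here is a minimal sketch (an example added here, not from the lecture; the data and function names are invented) that detects modification by comparing a stored SHA-256 digest with a freshly computed one:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 digest of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Record a reference digest while the data is known to be good.
original = b"patient_id=42;diagnosis=ok"
reference = sha256_digest(original)

# Later, recompute and compare: any change, even a single bit, is detected.
received = b"patient_id=42;diagnosis=OK"
if sha256_digest(received) != reference:
    print("Integrity check failed: the data was modified.")
```

A plain hash only detects changes if the reference digest itself cannot be tampered with; in an adversarial setting one would normally use a keyed MAC (e.g., HMAC) or a digital signature, which also provides authenticity.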
Security Services

Reaching the objectives of cybersecurity requires enforcing physical, organizational, and logical counter-measures. Even if physical measures (such as guarding or controlling access to buildings) and organizational measures (such as precisely defining the mission of an external IT service provider) are crucial, cybersecurity experts focus on logical security, i.e., on hardware and software services and mechanisms that ensure the properties of confidentiality, integrity, and availability. A secure computer system must offer preventive services to hinder any violation of these properties, detection services to identify any successful attempt to violate them, and reaction services to deploy new or enhanced counter-measures in case of a successful violation. Indeed, while the goal of cybersecurity is to protect a computer system against attacks, one must also assume that some attacks will succeed. Therefore, cybersecurity also deals with intrusion detection and responses to attacks.

Prevention first involves precisely defining which entity may access what information and in which way: permissions, prohibitions, or obligations to read or write information are to be defined. This constitutes a so-called security policy (a minimal sketch of such a policy check appears at the end of this section). Prevention can even take place before the definition of a policy: it is good software engineering to detect, early on, source and binary code vulnerabilities that could be exploited to violate the security properties; this is the security-by-design principle. Even earlier, we may prove that a given property is guaranteed by the software; this is formally proved security.

The security policy is concretely enforced through security services. The following services can be offered, depending on the policy and on the context: entity identification and authentication, control of access to information by these entities, control of information flows within the system, detection of attempts to exploit potential vulnerabilities of the system (intrusion detection, virus detection), and responses to these attempts (reaction).

An even more ambitious objective would be the ability of a computer system to deliver the intended outcome despite adverse cyber events. In other words, the computer system would tolerate attacks, a capacity generally called cyber-resilience. From a high-level point of view, an entity (state, company, organization, etc.) may be more concerned with cyber-resilience, which is the final objective to achieve, than with cybersecurity, which is a set of deployed techniques that the end user need not necessarily see. Cyber-resilience, being the capacity to tolerate attacks, has many similarities with fault tolerance, which deals with hazardous hardware failures or software bugs. Even if the hypotheses of safety and security are quite different, since attackers do not follow the rules but rather continuously search for new breaches, the mechanisms proposed to tolerate faults may be adapted to tolerate attacks. Basic principles of cyber-resilience include replication of data and backups, which have long been well established in the database community. Moreover, replication should be used in the context of a distributed system, to avoid having a single point of failure. While cyber-resilience is of major importance, many techniques to achieve it are reminiscent of other fields (e.g., safety) and will not be detailed further; others are fully relevant to the security field (e.g., DDoS mitigation) and will be discussed in the corresponding sections.
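As the sketch promised above, a "who may do what" security policy can be made concrete with a tiny default-deny access-control check. This is an illustrative sketch only; the subjects, objects, and permissions are invented and do not come from the lecture.

```python
# Access-control matrix: subject -> object -> set of permitted actions.
policy = {
    "alice": {"patients.db": {"read", "write"}},
    "bob":   {"patients.db": {"read"}},
}

def is_permitted(subject: str, obj: str, action: str) -> bool:
    """Default deny: allow only actions the policy explicitly grants."""
    return action in policy.get(subject, {}).get(obj, set())

print(is_permitted("alice", "patients.db", "write"))  # True: explicitly granted
print(is_permitted("bob", "patients.db", "write"))    # False: not granted, so denied
```

The default-deny choice mirrors the definition given earlier: securing a system means preventing any entity that is not explicitly authorized from accessing or altering a resource.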
