Chapter 1: Introduction to Computer Security
SPRING 2024 | CIT460 COMPUTER AND INFORMATION SECURITY

Learning Objectives
▪Describe the critical security requirements of confidentiality, integrity, and availability.
▪Discuss the types of security threats and attacks that must be dealt with and give examples of the kinds of threats and attacks that apply to different categories of computer and network assets.
▪Summarize the functional requirements for computer security.
▪Explain the fundamental security design principles.
▪Discuss the use of attack surfaces and attack trees.
▪Understand the principal aspects of a comprehensive security strategy.

NIST Definition of Computer Security
▪Computer Security: Measures and controls that ensure confidentiality, integrity, and availability of information system assets, including hardware, software, firmware, and information being processed, stored, and communicated.

Computer Security Overview
"Computer security deals with computer-related assets subject to various threats and for which various measures are taken to protect those assets."

The Three Key Objectives in the Definition of Computer Security
▪Confidentiality:
▪ Data confidentiality: Assures that private or confidential information is not made available or disclosed to unauthorized individuals.
▪ Privacy: Assures that individuals control or influence what information related to them may be collected and stored and by whom and to whom that information may be disclosed.
▪Integrity: This term covers two related concepts:
▪ Data integrity: Assures that information and programs are changed only in a specified and authorized manner.
▪ System integrity: Assures that a system performs its intended function in an unimpaired manner, free from deliberate or inadvertent unauthorized manipulation of the system.
▪Availability: Assures that systems work promptly and service is not denied to authorized users.

NIST FIPS 199 CIA Definition
▪Confidentiality: Preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information. A loss of confidentiality is the unauthorized disclosure of information.
▪Integrity: Guarding against improper information modification or destruction, including ensuring information nonrepudiation and authenticity. A loss of integrity is the unauthorized modification or destruction of information.
▪Availability: Ensuring timely and reliable access to and use of information. A loss of availability is the disruption of access to or use of information or an information system.

Beyond the CIA Triad: Authenticity and Accountability
▪Authenticity: The property of being genuine and being able to be verified and trusted; confidence in the validity of a transmission, a message, or message originator. This means verifying that users are who they say they are and that each input arriving at the system came from a trusted source.
▪Accountability: The security goal that generates the requirement for actions of an entity to be traced uniquely to that entity. This supports nonrepudiation, deterrence, fault isolation, intrusion detection and prevention, and after-action recovery and legal action. Because truly secure systems are not yet an achievable goal, we must be able to trace a security breach to a responsible party. Systems must keep records of their activities to permit later forensic analysis to trace security breaches or to aid in transaction disputes.
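The accountability requirement above asks that systems keep records of their activities so that actions can later be traced to a responsible party. As a minimal illustration of that idea (not part of the original slides; the record fields and log format are assumptions chosen for the example), an application might append one structured record per security-relevant event:

```python
# Minimal audit-trail sketch for the accountability requirement.
# The record fields (actor, action, resource, outcome) are illustrative
# assumptions, not a format mandated by the slides.
import json
import time


def audit(log_path: str, actor: str, action: str, resource: str, outcome: str) -> None:
    """Append one record per security-relevant event to support traceability."""
    record = {
        "timestamp": time.time(),   # when the action occurred
        "actor": actor,             # who performed it
        "action": action,           # what was attempted
        "resource": resource,       # what it was attempted on
        "outcome": outcome,         # "allowed", "denied", "failed", ...
    }
    with open(log_path, "a", encoding="utf-8") as fp:
        fp.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    audit("audit.log", actor="alice", action="read", resource="payroll.db", outcome="denied")
```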
Computer Security Challenges
▪Security is not simple
▪Security mechanisms typically involve more than a particular algorithm or protocol
▪Potential attacks on the security features need to be considered
▪Security is essentially a battle of wits between a perpetrator and the designer
▪Procedures used to provide particular services are often counter-intuitive
▪Little benefit from security investment is perceived until a security failure occurs
▪It is necessary to decide where to use the various security mechanisms
▪Strong security is often viewed as an impediment to efficient and user-friendly operation
▪Requires constant monitoring
▪Is too often an afterthought

Computer System Assets
▪Hardware: Computer systems and other data processing, data storage, and data communications devices.
▪Software: Including the operating system, system utilities, and applications.
▪Data: Including files and databases, as well as security-related data such as password files.
▪Communication facilities and networks: Local and wide area network communication links, bridges, routers, and so on.

Computer Security Terminology: The Model for Computer Security
▪Adversary (threat agent): Individual, group, organization, or government that conducts or has the intent to conduct detrimental activities.
▪Attack: Any kind of malicious activity that attempts to collect, disrupt, deny, degrade, or destroy information system resources or the information itself.
▪Countermeasure: A device or technique that has as its objective the impairment of the operational effectiveness of undesirable or adversarial activity, or the prevention of espionage, sabotage, theft, or unauthorized access to or use of sensitive information or information systems.
▪Risk: A measure of the extent to which a potential circumstance or event threatens an entity, typically a function of 1) the adverse impacts that would arise if the circumstance or event occurs, and 2) the likelihood of occurrence (a minimal scoring sketch follows the terminology slides below).

Computer Security Terminology: The Model for Computer Security
▪Security Policy: A set of criteria for providing security services. It defines and constrains the activities of a data processing facility to maintain a condition of security for systems and data.
▪System Resource (Asset): A major application, general support system, high-impact program, physical plant, mission-critical system, personnel, equipment, or a logically related group of systems.
▪Threat: Any circumstance or event with the potential to adversely impact organizational operations (including mission, functions, image, or reputation), organizational assets, individuals, other organizations, or the nation through an information system via unauthorized access, destruction, disclosure, modification of information, and/or denial of service.
▪Vulnerability: Weakness in an information system, system security procedures, internal controls, or implementation that could be exploited or triggered by a threat source.
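The Risk definition above treats risk as a function of the adverse impact of an event and the likelihood of its occurrence. The slides do not say how to combine the two; a common qualitative convention, assumed here purely for illustration, is to rate each on a small ordinal scale and multiply:

```python
# Qualitative risk-scoring sketch. The 1-5 scales and the likelihood x impact
# product are conventions assumed for illustration; the slides only say that
# risk is a function of impact and likelihood.
LEVELS = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}


def risk_score(likelihood: str, impact: str) -> int:
    """Combine ordinal likelihood and impact ratings into a single score."""
    return LEVELS[likelihood] * LEVELS[impact]


if __name__ == "__main__":
    # A likely, high-impact event scores far above a rare, low-impact one.
    print(risk_score("high", "very high"))  # 20
    print(risk_score("very low", "low"))    # 2
```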
The Concern of Vulnerabilities
▪In the context of security, our concern is with the vulnerabilities of system resources.
▪The general categories of vulnerabilities of a computer system or network asset are:
1. The system can be corrupted, so it does the wrong thing or gives the wrong answers. For example, stored data values may differ from what they should be because they have been improperly modified.
2. The system can become leaky. For example, someone who should not have access to some or all of the information available through the network obtains such access.
3. The system can become unavailable or very slow. That is, using the system or network becomes impossible or impractical.

Threats, Attacks, and Threat Agents
▪Corresponding to the various types of vulnerabilities of a system resource are threats that are capable of exploiting those vulnerabilities.
▪A threat represents potential security harm to an asset.
▪An attack is a threat that is carried out (threat action) and, if successful, leads to an undesirable violation of security, or threat consequence.
▪The agent carrying out the attack is referred to as an attacker or threat agent. We can distinguish two types of attacks:
▪ Active attack: An attempt to alter system resources or affect their operation.
▪ Passive attack: An attempt to learn or make use of information from the system that does not affect system resources.

Attacks Based on the Origin of the Attack
▪Inside attack: Initiated by an entity inside the security perimeter (an "insider"). The insider is authorized to access system resources but uses them in a way not approved by those who granted the authorization.
▪Outside attack: Initiated from outside the perimeter, by an unauthorized or illegitimate user of the system (an "outsider"). On the Internet, potential outside attackers range from amateur pranksters to organized criminals, international terrorists, and hostile governments.

Countermeasures: Prevent, Detect, Recover
▪A countermeasure is any means taken to deal with a security attack.
▪Ideally, a countermeasure can be devised to prevent a particular type of attack from succeeding.
▪When prevention is not possible, or fails in some instances, the goal is to detect the attack and then recover from its effects.
▪A countermeasure may itself introduce new vulnerabilities.
▪In any case, residual vulnerabilities may remain after the imposition of countermeasures.
▪Such vulnerabilities may be exploited by threat agents, representing a residual level of risk to the assets.
▪Owners will seek to minimize that risk given other constraints.

Security Concepts and Relationships

Threat Consequences and Threat Actions (Attacks)
Unauthorized Disclosure: A circumstance or event where an entity gains access to data for which the entity is not authorized. Unauthorized disclosure is a threat to confidentiality. Threat actions that can cause this consequence:
▪ Exposure: Sensitive data are directly released to an unauthorized entity.
▪ Interception: An unauthorized entity directly accesses sensitive data traveling between authorized sources and destinations.
▪ Inference: A threat action whereby an unauthorized entity indirectly accesses sensitive data (but not necessarily the data contained in the communication) by reasoning from characteristics or by-products of communications.
▪ Intrusion: An unauthorized entity gains access to sensitive data by overcoming a system's security protections, such as access control protections.

Deception: A circumstance or event that may result in an authorized entity receiving false data and believing it to be true. Deception is a threat to either system integrity or data integrity. Threat actions that can cause this consequence:
▪ Masquerade: An unauthorized entity gains access to a system or performs a malicious act by posing as an authorized entity.
▪ Falsification: False data deceive an authorized entity.
▪ Repudiation: An entity deceives another by falsely denying responsibility for an act.
Disruption: A circumstance or event that interrupts or prevents the correct operation of system services and functions. Disruption is a threat to availability or system integrity. Threat actions that can cause this consequence:
▪ Incapacitation: Prevents or interrupts system operation by disabling a system component.
▪ Corruption: Undesirably alters system operation by adversely modifying system functions or data.
▪ Obstruction: A threat action that interrupts the delivery of system services by hindering system operation.

Usurpation: A circumstance or event that results in control of system services or functions by an unauthorized entity. Usurpation is a threat to system integrity. Threat actions that can cause this consequence:
▪ Misappropriation: An entity assumes unauthorized logical or physical control of a system resource.
▪ Misuse: Causes a system component to perform a function or service that is detrimental to system security.

Threats and Assets
▪The assets of a computer system can be categorized as hardware, software, data, and communication lines and networks.
▪We will briefly discuss these four categories and relate them to the concepts of integrity, confidentiality, and availability.

Scope of Computer Security

Computer and Network Assets, with Examples of Threats
Hardware:
▪ Confidentiality: An unencrypted USB drive is stolen.
▪ Availability: Equipment is stolen or disabled, thus denying service.
Software:
▪ Confidentiality: An unauthorized copy of software is made.
▪ Integrity: A working program is modified, either to cause it to fail during execution or to cause it to do some unintended task.
▪ Availability: Programs are deleted, denying access to users.
Data:
▪ Confidentiality: An unauthorized read of data is performed. An analysis of statistical data reveals underlying data.
▪ Integrity: Existing files are modified, or new files are fabricated.
▪ Availability: Files are deleted, denying access to users.
Communication Lines and Networks:
▪ Confidentiality: Messages are read. The traffic pattern of messages is observed.
▪ Integrity: Messages are modified, delayed, reordered, or duplicated. False messages are fabricated.
▪ Availability: Messages are destroyed or deleted. Communication lines or networks are rendered unavailable.

Security Functional Requirements
▪There are a number of ways of classifying and characterizing the countermeasures that may be used to reduce vulnerabilities and deal with threats to system assets.
▪We will discuss countermeasures in terms of functional requirements and follow the classification defined in FIPS 200 (Minimum Security Requirements for Federal Information and Information Systems).
▪This standard enumerates 17 security-related areas with regard to protecting the confidentiality, integrity, and availability of information systems and the information processed, stored, and transmitted by those systems.

Areas of Security Requirements
1. Access Control
2. Awareness and Training
3. Audit and Accountability
4. Certification, Accreditation, and Security Assessments
5. Configuration Management
6. Contingency Planning
7. Identification and Authentication
8. Incident Response
9. Maintenance
10. Media Protection
11. Physical and Environmental Protection
12. Planning
13. Personnel Security
14. Risk Assessment
15. System and Services Acquisition
16. System and Communications Protection
17. System and Information Integrity

Fundamental Security Design Principles
▪Despite years of research and development, it has not been possible to develop security design and implementation techniques that systematically exclude security flaws and prevent all unauthorized actions.
▪The National Centers of Academic Excellence in Information Assurance/Cyber Defense program, jointly sponsored by the U.S. National Security Agency and the U.S. Department of Homeland Security, lists the fundamental security design principles shown in the next slide.

Fundamental Security Design Principles
1. Economy of mechanism
2. Fail-safe defaults
3. Complete mediation
4. Open design
5. Separation of privilege
6. Least privilege
7. Least common mechanism
8. Psychological acceptability
9. Isolation
10. Encapsulation
11. Modularity
12. Layering
13. Least astonishment

Economy of Mechanism
▪Economy of mechanism means the design of security measures embodied in both hardware and software should be as simple and small as possible.
▪The motivation for this principle is that:
▪ A relatively simple, small design is easier to test and verify thoroughly.
▪ With a complex design, there are many more opportunities for an adversary to discover subtle weaknesses to exploit that may be difficult to spot ahead of time.

Fail-Safe Defaults
▪Fail-safe defaults means access decisions should be based on permission rather than exclusion.
▪That is, the default situation is lack of access, and the protection scheme identifies conditions under which access is permitted.
▪This approach exhibits a better failure mode than the alternative approach, where the default is to permit access (a short default-deny code sketch appears after the Least Privilege slide below).

Complete Mediation
▪Complete mediation means every access must be checked against the access control mechanism.
▪Systems should not rely on access decisions retrieved from a cache.
▪In a system designed to operate continuously, this principle requires that, if access decisions are remembered for future use, careful consideration be given to how changes in authority are propagated into such local memories.

Open Design
▪Open design means the design of a security mechanism should be open rather than secret.
▪For example, although encryption keys must be secret, encryption algorithms should be open to public scrutiny.

Separation of Privilege
▪Separation of privilege is defined as a practice in which multiple privilege attributes are required to achieve access to a restricted resource.
▪A good example of this is multifactor user authentication, which requires the use of multiple techniques, such as a password and a smart card, to authorize a user.

Least Privilege
▪Least privilege means every process and every user of the system should operate using the least set of privileges necessary to perform the task.
▪The system security policy can identify and define the various roles of users or processes.
▪Each role is assigned only those permissions needed to perform its functions.
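Several of the principles above map directly onto code. The sketch below is a minimal illustration, not an excerpt from any real system; the role names and permissions are invented. It shows a default-deny access check: each role is granted only the permissions it needs (least privilege), anything not explicitly granted is refused (fail-safe defaults), and routing every request through the single check function is a simple form of complete mediation.

```python
# Default-deny, role-based access check illustrating fail-safe defaults,
# least privilege, and (when every request goes through it) complete mediation.
# The roles and permissions below are hypothetical examples.
ROLE_PERMISSIONS = {
    "auditor":  {("read", "logs")},                      # read-only, one resource
    "operator": {("read", "logs"), ("restart", "web")},  # only what the job needs
}


def is_allowed(role: str, action: str, resource: str) -> bool:
    """Return True only if the (action, resource) pair is explicitly granted."""
    granted = ROLE_PERMISSIONS.get(role, set())  # unknown role -> empty grant set
    return (action, resource) in granted         # anything not listed is denied


if __name__ == "__main__":
    print(is_allowed("auditor", "read", "logs"))    # True  (explicitly granted)
    print(is_allowed("auditor", "delete", "logs"))  # False (default deny)
    print(is_allowed("guest", "read", "logs"))      # False (unknown role)
```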
Least Common Mechanism
▪Least common mechanism means the design should minimize the functions shared by different users, providing mutual security.
▪This principle helps reduce the number of unintended communication paths and reduces the amount of hardware and software on which all users depend, thus making it easier to verify whether there are any undesirable security implications.

Psychological Acceptability
▪Psychological acceptability implies that the security mechanisms should not interfere unduly with the work of users, while at the same time meeting the needs of those who authorize access.
▪If security mechanisms hinder the usability or accessibility of resources, users may opt to turn off those mechanisms.

Isolation
▪Isolation is a principle that applies in three contexts.
▪ First, public access systems should be isolated from critical resources.
▪ Second, the processes and files of individual users should be isolated from one another except where it is explicitly desired.
▪ Third, security mechanisms should be isolated in the sense of preventing access to those mechanisms.
▪ For example, logical access control may provide a means of isolating cryptographic software from other parts of the host system and for protecting cryptographic software from tampering and the keys from replacement or disclosure.

Encapsulation
▪Encapsulation can be viewed as a specific form of isolation based on object-oriented functionality.
▪Protection is provided by encapsulating a collection of procedures and data objects in a domain of its own, so that the internal structure of a data object is accessible only to the procedures of the protected subsystem and the procedures may be called only at designated domain entry points.

Modularity
▪Modularity in the context of security refers both to the development of security functions as separate, protected modules and to the use of a modular architecture for mechanism design and implementation.
▪The security design should be modular so that individual parts of the security design can be upgraded without the requirement to modify the entire system.

Layering
▪Layering refers to the use of multiple, overlapping protection approaches addressing the people, technology, and operational aspects of information systems.
▪By using multiple, overlapping protection approaches, the failure or circumvention of any individual protection approach will not leave the system unprotected.

Least Astonishment
▪Least astonishment means a program or user interface should always respond in the way that is least likely to astonish the user.
▪For example, the mechanism for authorization should be transparent enough to a user that the user has a good intuitive understanding of how the security goals map to the provided security mechanism.

Attack Surfaces
▪An attack surface consists of the reachable and exploitable vulnerabilities in a system.
▪Examples of attack surfaces include the following (a small port-probing sketch appears after the Attack Surface Analysis slide below):
1. Open ports on outward-facing Web and other servers, and code listening on those ports.
2. Services available on the inside of a firewall.
3. Code that processes incoming data, e-mail, XML, office documents, and industry-specific custom data exchange formats.
4. Interfaces, SQL, and Web forms.
5. An employee with access to sensitive information who is vulnerable to a social engineering attack.

Attack Surface Categories
▪Network attack surface: This category refers to vulnerabilities over an enterprise network, wide area network, or the Internet. It includes network protocol vulnerabilities, such as those used for a denial-of-service attack, disruption of communications links, and various forms of intruder attacks.
▪Software attack surface: This refers to vulnerabilities in application, utility, or operating system code. A particular focus in this category is Web server software.
▪Human attack surface: This category refers to vulnerabilities created by personnel or outsiders, such as social engineering, human error, and trusted insiders.

Attack Surface Analysis
▪An attack surface analysis is a useful technique for assessing the scale and severity of threats to a system.
▪A systematic analysis of points of vulnerability makes developers and security analysts aware of where security mechanisms are required.
▪Once an attack surface is defined, designers may be able to find ways to make the surface smaller, thus making the task of the adversary more difficult.
▪The attack surface also guides setting priorities for testing, strengthening security measures, or modifying the service or application.
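As a small, concrete illustration of mapping one corner of the network attack surface, the sketch below probes a short list of TCP ports on a host and reports which ones accept connections. It uses only the Python standard library; the host and port list are placeholder assumptions, and a real attack surface analysis covers far more than open ports.

```python
# Minimal network attack-surface probe: report which TCP ports accept
# connections on a host. Host and port list are illustrative placeholders;
# only probe systems you are authorized to test.
import socket

PORTS_TO_CHECK = [22, 80, 443, 3306, 8080]


def open_ports(host: str, ports, timeout: float = 1.0):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    reachable = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if sock.connect_ex((host, port)) == 0:
                reachable.append(port)
    return reachable


if __name__ == "__main__":
    print(open_ports("127.0.0.1", PORTS_TO_CHECK))
```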
Defense in Depth and Attack Surface

Attack Trees Mechanism
▪An attack tree is a branching, hierarchical data structure representing a set of potential techniques for exploiting security vulnerabilities.
▪The security incident that is the attack's goal is represented as the tree's root node, and the ways by which an attacker could reach that goal are iteratively and incrementally represented as branches and subnodes of the tree.
▪Each subnode defines a subgoal, and each subgoal may have its own set of further subgoals, and so on.
▪The final nodes on the paths outward from the root, the leaf nodes, represent different ways to initiate an attack.

Attack Trees Mechanism
▪Each node other than a leaf is either an AND-node or an OR-node.
▪To achieve the goal represented by an AND-node, the subgoals represented by all of that node's subnodes must be achieved; for an OR-node, at least one of the subgoals must be achieved.
▪Branches can be labeled with values representing difficulty, cost, or other attack attributes so that alternative attacks can be compared.

Attack Trees Motivation
▪The motivation for the use of attack trees is to effectively exploit the information available on attack patterns.
▪Organizations such as CERT publish security advisories that have enabled the development of a body of knowledge about both general attack strategies and specific attack patterns.
▪Security analysts can use the attack tree to document security attacks in a structured form that reveals key vulnerabilities.
▪The attack tree can guide both the design of systems and applications and the choice and strength of countermeasures.

Attack Trees Example
▪Consider an attack tree analysis for an Internet banking authentication application.
▪The root of the tree is the objective of the attacker, which is to compromise a user's account.
▪The shaded boxes on the tree are the leaf nodes, which represent the events that make up the attacks.
▪The white boxes are categories that consist of one or more specific attack events (leaf nodes).
▪Note that in this tree, all the nodes other than leaf nodes are OR-nodes.

Attack Trees Example
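An attack tree is easy to represent directly as a data structure. The sketch below is an illustrative model, not the Internet banking tree from the slides; the node labels are invented. It encodes the AND/OR semantics described above and checks whether a given set of achieved leaf events is enough to reach the root goal.

```python
# Attack-tree sketch with AND/OR semantics: an AND node needs all of its
# children achieved, an OR node needs at least one, and a leaf is achieved
# if its event is in the attacker's achieved set. The example tree below is
# a made-up illustration, not the banking tree from the slides.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    label: str
    kind: str = "leaf"              # "leaf", "and", or "or"
    children: List["Node"] = field(default_factory=list)

    def achieved(self, events: set) -> bool:
        if self.kind == "leaf":
            return self.label in events
        if self.kind == "and":
            return all(child.achieved(events) for child in self.children)
        return any(child.achieved(events) for child in self.children)  # "or"


if __name__ == "__main__":
    root = Node("compromise account", "or", [
        Node("guess credentials", "and", [
            Node("obtain username"),
            Node("brute-force password"),
        ]),
        Node("phish credentials"),
    ])
    print(root.achieved({"obtain username"}))                          # False
    print(root.achieved({"obtain username", "brute-force password"}))  # True
    print(root.achieved({"phish credentials"}))                        # True
```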
Computer Security Strategies
▪A comprehensive security strategy involves three aspects:
1. Specification/policy: What is the security scheme supposed to do?
2. Implementation/mechanisms: How does it do it?
3. Correctness/assurance: Does it really work?

Security Policy
▪The first step in devising security services and mechanisms is to develop a security policy.
▪In developing a security policy, a security manager needs to consider the following factors:
1. The value of the assets being protected
2. The vulnerabilities of the system
3. Potential threats and the likelihood of attacks
▪The manager must also consider the following trade-offs:
1. Ease of use versus security
2. Cost of security versus the cost of failure and recovery

Security Implementation
▪Security implementation involves four complementary courses of action:
1. Prevention: An ideal security scheme is one in which no attack succeeds. Although this is not practical in all cases, there is a wide range of threats for which prevention is a reasonable goal.
2. Detection: In several cases, absolute protection is not feasible, but it is practical to detect security attacks.
3. Response: If security mechanisms detect an ongoing attack, such as a denial-of-service attack, the system may be able to respond in such a way as to halt the attack and prevent further damage.
4. Recovery: An example of recovery is the use of backup systems, so that if data integrity is compromised, a prior, correct copy of the data can be reloaded.

Assurance and Evaluation
▪Those who are "consumers" of computer security services and mechanisms (e.g., system managers, vendors, customers, and end users) want to believe that the security measures in place work as intended.
▪That is, security consumers want to feel that the security infrastructure of their systems meets security requirements and enforces security policies.
▪These considerations bring us to the concepts of assurance and evaluation.

Assurance
▪Assurance is an attribute of an information system that provides grounds for having confidence that the system operates such that the system's security policy is enforced.
▪This encompasses both system design and system implementation.
▪Thus, assurance deals with the questions, "Does the security system design meet its requirements?" and "Does the security system implementation meet its specifications?"
▪Assurance is expressed as a degree of confidence, not in terms of a formal proof that a design or implementation is correct.

Evaluation
▪Evaluation is the process of examining a computer product or system with respect to certain criteria.
▪Evaluation involves testing and may also involve formal analytic or mathematical techniques.
▪The central thrust of work in this area is the development of evaluation criteria that can be applied to any security system (encompassing security services and mechanisms) and that are broadly supported for making product comparisons.

Computer Security Standards
▪NIST (National Institute of Standards and Technology)
▪ISOC (Internet Society)
▪ITU-T (ITU Telecommunication Standardization Sector)
▪ISO (International Organization for Standardization)

Questions?