Security Engineering Lecture (8) PDF
University of Science and Technology
Dania Mohamed Ahmed
Summary
This presentation discusses security engineering concepts, such as confidentiality, integrity, and availability. It explores levels of security, including infrastructure, application, and operational security. The lecture also delves into security and dependability attributes and security requirements.
Full Transcript
Security Engineering Lecture (8)
Dania Mohamed Ahmed

Introduction

The rise of the Internet in the 1990s posed new security challenges for software engineers. With more systems online, the risk of various attacks increased, making it crucial to design secure systems. Engineers must now account for both malicious attacks and accidental errors during development. To create secure systems, three key dimensions must be considered:

1. Confidentiality: Protecting sensitive information from unauthorized access, such as preventing credit card data theft.
2. Integrity: Ensuring information remains accurate and reliable, as in the case of a worm that corrupts data.
3. Availability: Maintaining access to systems and data; for example, a denial-of-service attack can make a system unavailable.

These dimensions are interconnected. If a system's availability is compromised, it may hinder updates, affecting integrity. Conversely, a breach in integrity can necessitate system downtime, impacting availability. Addressing these issues is essential for building dependable systems.

Levels of security

From an organizational viewpoint, security should be addressed at three levels:

1. Infrastructure Security: Focuses on protecting the systems and networks that provide essential services to the organization.
2. Application Security: Ensures the security of individual applications or groups of related systems.
3. Operational Security: Involves securing the day-to-day operations and usage of the organization's systems.

Each level is crucial for maintaining overall security within the organization.

Levels of security (cont.)

The next figure illustrates how an application system relies on a supporting infrastructure made up of various software and hardware layers. The software infrastructure can include:

- Operating systems (such as Linux or Windows)
- Generic applications (such as web browsers and email clients)
- Database management systems
- Middleware (for distributed computing and database access)
- Libraries of reusable components for application software

Network systems are controlled by software and can be vulnerable to security threats, such as attackers intercepting network packets. However, most security attacks target the software infrastructure because components like web browsers are widely used and can be easily probed for vulnerabilities.

[Figure: an application system supported by layers of software and hardware infrastructure]

Levels of security (cont.)

Infrastructure Security is mainly a management issue, focusing on configuring systems to resist attacks. Key activities include:

1. User and Permission Management: Managing user access and ensuring proper authentication and permissions.
2. System Software Deployment and Maintenance: Installing and configuring software securely, as well as regularly updating it to fix vulnerabilities.
3. Attack Monitoring, Detection, and Recovery: Monitoring for unauthorized access, detecting attacks, and ensuring backups are in place to restore normal operations after an attack.
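As a concrete illustration of the third activity, the short Python sketch below scans an authentication log for repeated failed logins and raises an alert when a possible password-guessing attack may be under way. The log format, account name, and the five-failures-in-ten-minutes threshold are assumptions made for illustration; they are not part of the lecture.

```python
# Minimal sketch of attack monitoring: flag accounts with repeated failed
# logins in a short time window. Log format and threshold are illustrative
# assumptions, not values defined in the lecture.
from collections import defaultdict
from datetime import datetime, timedelta

FAILURE_THRESHOLD = 5           # alert after this many failures...
WINDOW = timedelta(minutes=10)  # ...within this time window

def detect_password_guessing(log_lines):
    """log_lines: iterable of 'ISO-timestamp,username,SUCCESS|FAILURE' strings."""
    failures = defaultdict(list)
    alerts = []
    for line in log_lines:
        timestamp, user, outcome = line.strip().split(",")
        if outcome != "FAILURE":
            continue
        t = datetime.fromisoformat(timestamp)
        recent = [f for f in failures[user] if t - f <= WINDOW]
        recent.append(t)
        failures[user] = recent
        if len(recent) >= FAILURE_THRESHOLD:
            alerts.append(f"Possible password-guessing attack on account '{user}' at {t}")
    return alerts

if __name__ == "__main__":
    sample_log = [
        "2024-05-01T09:00:01,nurse.jones,FAILURE",
        "2024-05-01T09:00:20,nurse.jones,FAILURE",
        "2024-05-01T09:00:41,nurse.jones,FAILURE",
        "2024-05-01T09:01:02,nurse.jones,FAILURE",
        "2024-05-01T09:01:15,nurse.jones,FAILURE",
    ]
    for alert in detect_password_guessing(sample_log):
        print(alert)
```

In a real deployment, such monitoring would feed an intrusion detection or alerting system rather than printing to the console.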
Operational Security centers on human behavior, ensuring users don't compromise security (for example, by leaving systems unattended). Users may sometimes act insecurely to complete their tasks efficiently, making it important to raise awareness about security issues and find a balance between security and usability.

Security and dependability

Security is a crucial attribute of a system, representing its ability to protect itself from both internal and external threats. These threats arise mainly because many computers and mobile devices are networked and accessible from outside. Common attack methods include installing viruses or Trojan horses, unauthorized use of system services, and altering system data. To maximize security, one option would be to keep a system completely offline, limiting threats to those posed by authorized users and the control of devices such as USB drives. However, most systems rely on network access for significant benefits, making this approach impractical. For certain systems, especially military, e-commerce, and those handling confidential information, security is the top priority. While downtime in a system such as an airline reservation system might cause inconvenience, a security breach could allow an attacker to delete all bookings, jeopardizing normal operations entirely. Thus, robust security measures are essential for maintaining system dependability.

Security and dependability (cont.)

System vulnerabilities can emerge from various sources, including:

- Requirements, Design, or Implementation Issues: Flaws in how the system is planned or built can create weaknesses.
- Human, Social, or Organizational Factors: People may use easy-to-guess passwords, write them down in accessible places, or fail to install protection software.

These problems aren't just human errors; they often highlight poor system design. For instance, if a system requires frequent password changes, users might resort to writing down their passwords. Similarly, complex configuration processes can lead to mistakes by system administrators. Addressing these vulnerabilities requires not only improving user practices but also refining system design to enhance security.

Security terminology

Asset: Anything valuable that needs protection, such as the software system or its data.
Attack: An attempt to exploit a vulnerability in the system with the intent to damage its assets. Attacks can be external (from outside) or internal (from authorized users).
Control: A protective measure that reduces vulnerabilities in a system. For example, encryption helps secure a weak access control system.
Exposure: The potential loss or harm to a system, which could include data loss, damage, or the time and effort required for recovery after a breach.
Threat: Any circumstance that could potentially cause loss or harm, representing a vulnerability that could be exploited by an attack.
Vulnerability: A weakness in a system that can be exploited to cause damage or loss.

A security story for the Mentcare system: unauthorized access to the Mentcare system

Clinic staff log on to the Mentcare system using a username and password. The system requires passwords to be at least eight letters long but allows any password to be set without further checking. A criminal finds out that a well-paid sports star is receiving treatment for mental health problems. He would like to gain illegal access to information in this system so that he can blackmail the star. By posing as a concerned relative and talking with the nurses in the mental health clinic, he discovers how to access the system and personal information about the nurses and their families. By checking name badges, he discovers the names of some of the people allowed access. He then attempts to log on to the system by using these names and systematically guessing possible passwords, such as the names of the nurses' children.

Examples of security terminology

Asset: The record of each patient who is receiving or has received treatment.
Attack: An impersonation of an authorized user.
Control: A password checking system that disallows user passwords that are proper names or words that are normally included in a dictionary (a short sketch of such a checker follows this list).
Exposure: Potential financial loss from future patients who do not seek treatment because they do not trust the clinic to maintain their data; financial loss from legal action by the sports star; loss of reputation.
Threat: An unauthorized user will gain access to the system by guessing the credentials (login name and password) of an authorized user.
Vulnerability: Authentication is based on a password system that does not require strong passwords, so users can set easily guessable passwords.
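The "Control" entry above (a password checker that rejects proper names and dictionary words) can be illustrated with a minimal sketch. The word and name lists below are tiny illustrative stand-ins; a real control would check against a full dictionary and locally relevant names.

```python
# Sketch of the password-checking control: reject short passwords and
# passwords that are common dictionary words or proper names. Word lists
# here are illustrative placeholders only.
COMMON_WORDS = {"password", "letmein", "welcome", "sunshine"}
KNOWN_NAMES = {"emma", "oliver", "sophia", "jack"}   # e.g. names an attacker could guess
MIN_LENGTH = 8

def password_is_acceptable(password: str) -> tuple[bool, str]:
    candidate = password.lower()
    if len(password) < MIN_LENGTH:
        return False, "Password must be at least 8 characters long."
    if candidate in COMMON_WORDS:
        return False, "Password is a common dictionary word."
    if candidate in KNOWN_NAMES:
        return False, "Password is a proper name and is easy to guess."
    return True, "Password accepted."

print(password_is_acceptable("sophia"))      # rejected: too short and a known name
print(password_is_acceptable("t4!bK9#qLm"))  # accepted
```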
Types of security threats

There are four main types of security threats that can impact a system:

1. Interception Threats: These threats allow an attacker to access valuable assets. For example, in the Mentcare system, an attacker might gain access to a patient's records.
2. Interruption Threats: These threats make parts of the system unavailable. A common example is a denial-of-service attack targeting a system's database server.
3. Modification Threats: These threats enable an attacker to alter or destroy system assets. In the Mentcare system, this could involve an attacker changing or deleting a patient record.
4. Fabrication Threats: These threats allow an attacker to insert false information into a system. While this might not be a significant threat in the Mentcare system, it could be critical in banking systems, where false transactions could redirect funds to the attacker's account.

Understanding these threat types is essential for developing robust security measures to protect against them.

How to enhance system security

To enhance system security, you can implement controls based on three key concepts: avoidance, detection, and recovery.

1. Vulnerability Avoidance: These controls aim to prevent attacks from being successful. This can involve designing the system to eliminate security risks. For instance, sensitive military systems are often kept off the Internet to reduce external access. Additionally, using encryption protects data; even if unauthorized access occurs, attackers cannot read the encrypted information, making it costly and time-consuming to crack (a short encryption sketch follows this list).
2. Attack Detection and Neutralization: These controls are designed to detect and respond to attacks. They include monitoring the system for unusual activity patterns. If an attack is detected, actions can be taken, such as shutting down affected parts of the system or restricting access to certain users to mitigate the threat.
3. Exposure Limitation and Recovery: These controls facilitate recovery from incidents. They can include automated backup strategies, data mirroring, and even insurance policies that cover costs associated with a successful attack. These measures help ensure that the organization can bounce back quickly after a security breach.

Implementing these controls collectively strengthens overall system security and resilience.
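To make the vulnerability-avoidance idea concrete, the sketch below encrypts a record with a symmetric key so that an attacker who obtains the stored data cannot read it without the key. It assumes the third-party Python "cryptography" package; the record content and the key handling are simplified for illustration.

```python
# Sketch of encryption as a vulnerability-avoidance control: data at rest is
# unreadable without the key. Requires the 'cryptography' package
# (pip install cryptography); record content is invented for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice the key would live in a key store, never beside the data
cipher = Fernet(key)

record = b"Patient: J. Smith; Diagnosis: ...; Treatment: ..."
token = cipher.encrypt(record)     # what an attacker would see on disk
print(token)

original = cipher.decrypt(token)   # recovering the record requires the key
assert original == record
```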
Security and dependability attributes

Security is closely linked to key dependability attributes:

1. Security and Reliability: Attacks can corrupt data, leading to system failures. Development errors, such as improper input validation, can create vulnerabilities that attackers exploit.
2. Security and Availability: Denial-of-service attacks can overwhelm web servers, making them unavailable. Attackers may also threaten these attacks for ransom.
3. Security and Safety: Safety checks assume that the executing code matches the source code. If an attacker alters the code, it can lead to safety failures. Both security and safety focus on preventing harmful events.
4. Security and Resilience: Resilience is a system's ability to recover from damaging events, particularly cyberattacks. Strategies should aim to deter, detect, and recover from these threats.

For software systems to be reliable, available, and safe, security must be considered at every stage of development, from requirements to operation. It is essential, not optional, for system dependability.

Security and organization

Building secure systems is challenging due to high costs and uncertainties. Unlike safety regulations that mandate compliance regardless of cost, security failures usually don't result in legal consequences unless personal data is breached. As a result, organizations may choose to accept certain risks rather than invest heavily in security measures. For example, in the credit card industry, compensating users for fraud can be more cost-effective than deploying extensive fraud prevention technologies. Effective security risk management is framed as a business issue rather than a purely technical one, requiring a clear information security policy that outlines:

1. Protected Assets: Not all assets require stringent security; public information can often be less protected.
2. Protection Levels: Different assets necessitate varying levels of security, depending on their sensitivity and the potential consequences of their loss.
3. User Responsibilities: Policies should clarify user expectations, such as password usage, and what the organization provides in terms of security support.
4. Existing Procedures: Organizations may need to maintain existing security measures, even if they are outdated, due to practicality.

Security risk assessment

Security risk assessment and management are crucial organizational processes aimed at identifying and understanding risks to information assets, such as systems and data. While ideally each asset should undergo an individual risk assessment, this may not be practical for organizations with many existing systems, prompting the use of generic assessments. However, new systems should receive individual assessments. Risk assessment is primarily an organizational rather than a technical activity, as some threats exploit broader organizational weaknesses rather than just technological vulnerabilities. For example, confirming an engineer's visit with suppliers can deter unauthorized access more effectively than relying solely on technology.

The stages of risk assessment

The risk assessment process should be continuous throughout the development life cycle, encompassing several stages:

1. Preliminary Risk Assessment: This initial assessment identifies generic risks and evaluates whether sufficient security can be achieved cost-effectively, without detailed system specifications or known vulnerabilities.
2. Design Risk Assessment: Conducted during system development, this assessment informs security requirements based on design and implementation decisions, identifying known and potential vulnerabilities that may affect system functionality.
3. Operational Risk Assessment: Focused on the system's usage, this assessment addresses risks that arise in operational environments, such as unattended computers. It should continue post-installation to adapt to changes in system use and organizational structure, leading to the need for new security requirements as the system evolves.
Security requirements

The specification of security requirements is similar to that of safety requirements, as both often involve "shall not" conditions that define unacceptable behaviors. However, security presents greater challenges for several reasons:

1. Hostile Environment: Security must account for deliberate attacks by knowledgeable individuals, while safety can assume a non-hostile context.
2. Root Cause Analysis: Identifying the cause of failures is usually clear in safety scenarios, but security breaches can involve concealed methods, complicating this process.
3. System Downtime: Shutting down a system for safety is generally acceptable, but doing so in response to a security attack signals the attacker's success.
4. Nature of Events: Safety incidents are typically accidental, whereas security attacks are strategic, allowing attackers to adapt their approaches.

Consequently, security requirements are generally more comprehensive than safety requirements, focusing on a wider array of threats and considering various types of potential attacks.

Types of security requirements

Firesmith (2003) identified ten types of security requirements that can be included in a system specification:

1. Identification Requirements: Specify whether the system should identify users before interaction.
2. Authentication Requirements: Define how users are identified.
3. Authorization Requirements: Outline the privileges and access permissions for identified users.
4. Immunity Requirements: Detail how the system protects itself against viruses and similar threats.
5. Integrity Requirements: Address measures to prevent data corruption.
6. Intrusion Detection Requirements: Specify mechanisms for detecting attacks on the system.
7. Nonrepudiation Requirements: Ensure that parties cannot deny their involvement in a transaction.
8. Privacy Requirements: Define how data privacy is maintained.
9. Security Auditing Requirements: Specify how system use can be audited.
10. System Maintenance Security Requirements: Outline how the system can prevent authorized changes from undermining its security.

Not all types of security requirements will apply to every system; their relevance depends on the specific system, its use case, and the expected users.

Security requirements

Preliminary risk assessment and analysis focus on identifying generic security risks for a system and its data, serving as a crucial input for the security requirements engineering process. Security requirements can be proposed to support general risk management strategies, which include:

1. Risk Avoidance Requirements: Define how to design the system to eliminate specific risks entirely.
2. Risk Detection Requirements: Specify mechanisms to identify and neutralize risks before losses occur.
3. Risk Mitigation Requirements: Outline how the system should be designed to recover from losses and restore system assets.

These strategies collectively enhance the system's resilience against potential security threats.

A risk-driven security requirements process

1. Asset Identification: Identify system assets needing protection, including the system itself and associated data.
2. Asset Value Assessment: Estimate the value of these identified assets.
3. Exposure Assessment: Evaluate potential losses for each asset, considering direct losses, recovery costs, and reputational damage.
4. Threat Identification: Identify threats to the system's assets.
5. Attack Assessment: Decompose each threat into potential attacks and analyze how these attacks might occur, using tools like attack trees (a small sketch follows this list).
6. Control Identification: Propose protective controls, such as encryption, to safeguard assets.
7. Feasibility Assessment: Assess the technical feasibility and costs of the proposed controls, ensuring they are appropriate for the asset's value.
8. Security Requirements Definition: Use insights from the previous stages to derive specific security requirements for the system infrastructure or application.

This structured approach helps ensure that security measures are aligned with the identified risks and asset values.
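The attack trees mentioned in step 5 can be sketched as a simple tree of goals, where an OR node means any child attack suffices and an AND node means all child steps are needed. The node names and tree shape below are illustrative only, loosely based on the Mentcare story; they are not taken from the lecture.

```python
# Minimal sketch of an attack tree decomposing the threat
# "an unauthorized user gains access to patient records".
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    goal: str
    gate: str = "OR"                      # "OR": any child suffices; "AND": all children needed
    children: list["AttackNode"] = field(default_factory=list)

    def leaves(self):
        """Enumerate the basic attack steps at the bottom of the tree."""
        if not self.children:
            return [self.goal]
        return [leaf for child in self.children for leaf in child.leaves()]

tree = AttackNode("Gain unauthorized access to patient records", "OR", [
    AttackNode("Guess an authorized user's credentials", "AND", [
        AttackNode("Learn valid login names (e.g. from name badges)"),
        AttackNode("Guess weak passwords (e.g. children's names)"),
    ]),
    AttackNode("Intercept data in transit on the clinic network"),
    AttackNode("Steal a laptop holding a local copy of the records"),
])

for step in tree.leaves():
    print("-", step)
```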
[Figure: the preliminary risk assessment process for security requirements]

Example: Mentcare patient management system

The Mentcare patient management system is a security-critical application, and its risk analysis is documented through asset analysis and threat identification. Once a preliminary risk assessment is complete, requirements can be proposed to avoid, detect, and mitigate risks, but this process requires input from both engineers and domain experts. Some examples of security requirements for the Mentcare system, along with their associated risks, include:

1. Patient information must be downloaded to a secure area at the start of a clinic session. Risk: denial-of-service attacks; maintaining local copies allows continued access.
2. All patient information on the system client must be encrypted. Risk: external access to patient records; encryption protects data, as attackers would need the encryption key to access it.
3. Patient information must be uploaded to the database after a clinic session and deleted from the client's computer. Risk: external access through a stolen laptop.
4. A log of all changes to the system database and their initiators must be maintained on a separate computer from the database server. Risk: insider or external attacks that corrupt data; logs help recreate records from backups (a short change-log sketch follows below).

These requirements are designed to address specific risks and enhance the system's overall security. The first and third requirements are related: patient information is downloaded to a local machine so that consultations can continue if the server is unavailable, but this data must be deleted afterward to prevent unauthorized access. The fourth requirement focuses on recovery and auditing, allowing changes to be restored through a change log and enabling tracking of who made those changes. This accountability helps deter misuse by authorized personnel.
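Requirement 4 can be illustrated with a minimal change-log sketch that records every database change and its initiator, so that records can be reconstructed and misuse traced. In the actual requirement the log is kept on a separate computer from the database server; the sketch below writes to a local file purely for illustration, and the file name, field names, and sample data are assumptions.

```python
# Sketch of a change log supporting recovery and auditing. In the real
# requirement the log would be stored on a machine separate from the
# database server; here it is a local file for illustration only.
import json
from datetime import datetime, timezone

LOG_FILE = "mentcare_change_log.jsonl"   # illustrative path

def log_change(initiator: str, record_id: str, field: str, old_value, new_value):
    """Append one change record: who changed what, when, and from/to which values."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "initiator": initiator,
        "record_id": record_id,
        "field": field,
        "old": old_value,
        "new": new_value,
    }
    with open(LOG_FILE, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

def changes_by(initiator: str):
    """Support auditing: list every change made by a given user."""
    with open(LOG_FILE, encoding="utf-8") as log:
        entries = [json.loads(line) for line in log]
    return [e for e in entries if e["initiator"] == initiator]

log_change("nurse.jones", "patient-1042", "phone", "0770 000000", "0770 111111")
print(changes_by("nurse.jones"))
```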
Misuse cases

Misuse cases are scenarios that illustrate potential malicious interactions with a system, aiding in the identification of threats and security requirements during risk analysis. Developed for UML users, these cases complement traditional use cases by depicting attacks linked to them. While misuse cases can be included in use case diagrams, they also require detailed textual descriptions. They should be flexible to account for various types of attacks. Although misuse cases enhance the security requirements process, they do not capture all system requirements, necessitating consideration of risks from stakeholders who may not interact directly with the system.

Misuse case descriptions

Mentcare system: Transfer data (use case)
Actors: Medical receptionist, patient records system (PRS)
Description: A receptionist can transfer data from the Mentcare system to a general patient record database managed by a health authority. This data transfer may include updated personal information, such as the patient's address and phone number, or a summary of the patient's diagnosis and treatment.
Data: Patient's personal information, treatment summary
Stimulus: User command issued by the medical receptionist
Response: Confirmation that the PRS has been updated
Comments: The receptionist must have appropriate security permissions to access the patient information and the PRS.

Mentcare system: Intercept transfer (misuse case)
Actors: Medical receptionist, patient records system (PRS), attacker
Description: A receptionist transfers data from his or her PC to the Mentcare system on the server. An attacker intercepts the data transfer and takes a copy of that data.
Data (assets): Patient's personal information, treatment summary
Attacks:
- A network monitor is added to the system, and packets from the receptionist to the server are intercepted.
- A spoof server is set up between the receptionist and the database server, so that the receptionist believes they are interacting with the real system.
Mitigations:
- All networking equipment must be maintained in a locked room. Engineers accessing the equipment must be accredited.
- All data transfers between the client and server must be encrypted. Certificate-based client-server communication must be used.
Requirements: All communications between the client and the server must use the Secure Sockets Layer (SSL). The https protocol uses certificate-based authentication and encryption.
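This final requirement (all client-server communication over SSL/TLS with certificate-based authentication) might be exercised from a client as in the sketch below, which uses Python's standard ssl and urllib modules to verify the server's certificate and, optionally, to present a client certificate. The URL and certificate file names are placeholders, not real Mentcare values.

```python
# Sketch of certificate-verified HTTPS communication from a client.
# The URL stands in for the Mentcare server address; certificate file
# names are placeholders.
import ssl
import urllib.request

context = ssl.create_default_context()   # verifies the server certificate and host name
# For mutual (certificate-based) client-server authentication, the client
# would also present its own certificate:
# context.load_cert_chain(certfile="client.pem", keyfile="client.key")

with urllib.request.urlopen("https://example.org/", context=context) as response:
    print(response.status, response.read()[:100])
```

Strictly, modern systems use TLS (the successor to SSL), which is what the ssl module negotiates by default; the lecture's wording follows the older SSL/https terminology.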